Test Report: Docker_Linux_crio 21923

0ff1edca1acc03f8c3eb691c9cf9caebdbe6133d:2025-11-20:42417

Failed tests (39/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 14.3
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 148.87
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 46.31
42 TestAddons/parallel/Headlamp 2.55
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 11.2
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.27
47 TestAddons/parallel/AmdGpuDevicePlugin 5.26
97 TestFunctional/parallel/ServiceCmdConnect 602.99
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.13
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 424.57
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.7
191 TestJSONOutput/pause/Command 2.37
197 TestJSONOutput/unpause/Command 2.15
277 TestPause/serial/Pause 6
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.57
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.38
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.92
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.35
370 TestStartStop/group/old-k8s-version/serial/Pause 7.38
376 TestStartStop/group/no-preload/serial/Pause 6.31
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.1
383 TestStartStop/group/embed-certs/serial/Pause 6.38
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.01
393 TestStartStop/group/newest-cni/serial/Pause 6.28
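
Any single failure below can be re-run in isolation with go test's -run filter against the minikube integration suite. A minimal sketch, assuming a checkout of kubernetes/minikube at the commit above; the -minikube-start-args spelling and values are inferred from the Docker_Linux_crio job name rather than stated in this report:

	make                          # builds out/minikube-linux-amd64, which the integration tests invoke
	go test -v -timeout 90m ./test/integration \
	  -run "TestAddons/parallel/Registry" \
	  -minikube-start-args="--driver=docker --container-runtime=crio"
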
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.453611ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:31:46.213016  263383 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:46.213325  263383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:46.213336  263383 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:46.213342  263383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:46.213542  263383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:31:46.213838  263383 mustload.go:66] Loading cluster: addons-658933
	I1120 20:31:46.214240  263383 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:46.214262  263383 addons.go:607] checking whether the cluster is paused
	I1120 20:31:46.214369  263383 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:46.214386  263383 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:31:46.214776  263383 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:31:46.233477  263383 ssh_runner.go:195] Run: systemctl --version
	I1120 20:31:46.233546  263383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:31:46.251846  263383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:31:46.347008  263383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:31:46.347092  263383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:31:46.376835  263383 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:31:46.376867  263383 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:31:46.376871  263383 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:31:46.376876  263383 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:31:46.376879  263383 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:31:46.376883  263383 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:31:46.376885  263383 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:31:46.376888  263383 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:31:46.376890  263383 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:31:46.376912  263383 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:31:46.376915  263383 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:31:46.376918  263383 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:31:46.376920  263383 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:31:46.376922  263383 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:31:46.376925  263383 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:31:46.376937  263383 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:31:46.376944  263383 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:31:46.376948  263383 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:31:46.376951  263383 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:31:46.376953  263383 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:31:46.376956  263383 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:31:46.376958  263383 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:31:46.376960  263383 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:31:46.376963  263383 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:31:46.376965  263383 cri.go:89] found id: ""
	I1120 20:31:46.377025  263383 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:31:46.391900  263383 out.go:203] 
	W1120 20:31:46.392972  263383 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:31:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:31:46.392989  263383 out.go:285] * 
	W1120 20:31:46.397322  263383 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:31:46.398772  263383 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
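
Every addons disable failure in this report exits the same way: before disabling, minikube checks whether the cluster is paused, and that check shells into the node to run sudo runc list -f json, which fails because /run/runc does not exist. One plausible reading (an inference, not something the report states) is that this CRI-O node keeps its OCI runtime state elsewhere, e.g. under crun, so the runc state directory is never created. A minimal diagnostic sketch against the same profile:

	out/minikube-linux-amd64 -p addons-658933 ssh "ls -d /run/runc"         # expected: no such file or directory
	out/minikube-linux-amd64 -p addons-658933 ssh "sudo runc list -f json"  # reproduces the exact failure from the log
	out/minikube-linux-amd64 -p addons-658933 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"  # the preceding step, which still succeeds
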

TestAddons/parallel/Registry (14.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.292783ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002776024s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004247052s
addons_test.go:392: (dbg) Run:  kubectl --context addons-658933 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-658933 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-658933 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.789932812s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 ip
2025/11/20 20:32:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable registry --alsologtostderr -v=1: exit status 11 (289.968092ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:32:10.338198  265666 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:10.338553  265666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:10.338567  265666 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:10.338574  265666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:10.338900  265666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:10.339204  265666 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:10.339643  265666 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:10.339663  265666 addons.go:607] checking whether the cluster is paused
	I1120 20:32:10.339770  265666 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:10.339785  265666 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:10.340213  265666 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:10.361519  265666 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:10.361587  265666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:10.380399  265666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:10.481100  265666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:10.481231  265666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:10.525212  265666 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:10.525276  265666 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:10.525281  265666 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:10.525285  265666 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:10.525289  265666 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:10.525293  265666 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:10.525298  265666 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:10.525302  265666 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:10.525326  265666 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:10.525346  265666 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:10.525351  265666 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:10.525355  265666 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:10.525359  265666 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:10.525363  265666 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:10.525367  265666 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:10.525378  265666 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:10.525402  265666 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:10.525407  265666 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:10.525411  265666 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:10.525416  265666 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:10.525420  265666 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:10.525425  265666 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:10.525429  265666 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:10.525433  265666 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:10.525437  265666 cri.go:89] found id: ""
	I1120 20:32:10.525498  265666 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:10.545753  265666 out.go:203] 
	W1120 20:32:10.547183  265666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:10.547208  265666 out.go:285] * 
	W1120 20:32:10.554652  265666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:10.558646  265666 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.30s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.865562ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-658933
addons_test.go:332: (dbg) Run:  kubectl --context addons-658933 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (302.286059ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:32:15.557634  266600 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:15.580518  266600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:15.580549  266600 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:15.580557  266600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:15.580943  266600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:15.581448  266600 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:15.582040  266600 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:15.582070  266600 addons.go:607] checking whether the cluster is paused
	I1120 20:32:15.582213  266600 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:15.582249  266600 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:15.582878  266600 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:15.605051  266600 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:15.605107  266600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:15.622786  266600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:15.719165  266600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:15.719284  266600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:15.750816  266600 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:15.750864  266600 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:15.750872  266600 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:15.750879  266600 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:15.750883  266600 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:15.750888  266600 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:15.750892  266600 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:15.750895  266600 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:15.750898  266600 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:15.750913  266600 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:15.750917  266600 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:15.750921  266600 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:15.750926  266600 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:15.750930  266600 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:15.750934  266600 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:15.750952  266600 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:15.750965  266600 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:15.750971  266600 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:15.750975  266600 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:15.750979  266600 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:15.750983  266600 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:15.750986  266600 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:15.750990  266600 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:15.750993  266600 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:15.750997  266600 cri.go:89] found id: ""
	I1120 20:32:15.751059  266600 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:15.767805  266600 out.go:203] 
	W1120 20:32:15.772840  266600 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:15.772871  266600 out.go:285] * 
	W1120 20:32:15.778414  266600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:15.784416  266600 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (148.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-658933 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-658933 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-658933 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a15dc555-dd8e-4662-bc72-9396565e5d42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a15dc555-dd8e-4662-bc72-9396565e5d42] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.0036093s
I1120 20:32:18.350338  254094 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.313490085s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-658933 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
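
The curl above ran inside the node over ssh and exited with status 28, curl's timeout code, meaning no backend answered within the 2m14s the command was given. A sketch of follow-up checks against the same profile (flags and resource listings beyond what the test itself ran are assumptions):

	out/minikube-linux-amd64 -p addons-658933 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-658933 -n ingress-nginx get pods,svc -o wide   # is the controller Running and its service exposing port 80?
	kubectl --context addons-658933 get ingress,svc,pods -n default         # do the nginx resources from testdata/nginx-pod-svc.yaml exist and have addresses?
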
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-658933
helpers_test.go:243: (dbg) docker inspect addons-658933:

-- stdout --
	[
	    {
	        "Id": "3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b",
	        "Created": "2025-11-20T20:30:24.502961543Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:30:24.541712996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/hosts",
	        "LogPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b-json.log",
	        "Name": "/addons-658933",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-658933:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-658933",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b",
	                "LowerDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-658933",
	                "Source": "/var/lib/docker/volumes/addons-658933/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-658933",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-658933",
	                "name.minikube.sigs.k8s.io": "addons-658933",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8017a3fa4ff1d37c03e76184cd5c7dbf9fa32535c90958bcc30111c83a76d350",
	            "SandboxKey": "/var/run/docker/netns/8017a3fa4ff1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-658933": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4f40f7eea169273ab77e6077b611a0da6256676e25bf36be34a5384e5d64e88",
	                    "EndpointID": "a9c9dd9f392edfea3d71e8f8eef5854b3c5fc1703a76651ca67766835fc14d27",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ca:8a:06:a5:f1:e9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-658933",
	                        "3be029a1d6b7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-658933 -n addons-658933
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-658933 logs -n 25: (1.168832758s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-393068 --alsologtostderr --binary-mirror http://127.0.0.1:39031 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-393068 │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ delete  │ -p binary-mirror-393068                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-393068 │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ addons  │ disable dashboard -p addons-658933                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-658933                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ start   │ -p addons-658933 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:31 UTC │
	│ addons  │ addons-658933 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ addons-658933 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-658933 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ addons-658933 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ addons-658933 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ ip      │ addons-658933 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │ 20 Nov 25 20:32 UTC │
	│ addons  │ addons-658933 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ ssh     │ addons-658933 ssh cat /opt/local-path-provisioner/pvc-d5c5cd9e-0905-49d9-bb13-e35668184aec_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │ 20 Nov 25 20:32 UTC │
	│ addons  │ addons-658933 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-658933                                                                                                                                                                                                                                                                                                                                                                                           │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │ 20 Nov 25 20:32 UTC │
	│ addons  │ addons-658933 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ ssh     │ addons-658933 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ addons  │ addons-658933 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:32 UTC │                     │
	│ ip      │ addons-658933 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-658933        │ jenkins │ v1.37.0 │ 20 Nov 25 20:34 UTC │ 20 Nov 25 20:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:30:02
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
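
The [IWEF] prefix above encodes severity (Info, Warning, Error, Fatal), so failures can be pulled out of a start log like this one with a plain filter. A minimal sketch, assuming the log has been saved to a local file (the last_start.log name is a placeholder):

    # Keep only Warning/Error/Fatal lines from a glog-formatted minikube log.
    grep -E '^[WEF][0-9]{4} ' last_start.log

    # Same idea, but split severity, timestamp, and message apart.
    awk '/^[WEF][0-9]{4} /{print substr($1,1,1), $2, substr($0, index($0,"] ")+2)}' last_start.log
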
	I1120 20:30:02.705354  255490 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:30:02.705456  255490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:30:02.705461  255490 out.go:374] Setting ErrFile to fd 2...
	I1120 20:30:02.705464  255490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:30:02.705685  255490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:30:02.706192  255490 out.go:368] Setting JSON to false
	I1120 20:30:02.707020  255490 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11545,"bootTime":1763659058,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:30:02.707083  255490 start.go:143] virtualization: kvm guest
	I1120 20:30:02.709012  255490 out.go:179] * [addons-658933] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:30:02.710447  255490 notify.go:221] Checking for updates...
	I1120 20:30:02.710487  255490 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:30:02.711914  255490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:30:02.713304  255490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:30:02.714478  255490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:30:02.715547  255490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:30:02.716631  255490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:30:02.717961  255490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:30:02.742910  255490 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:30:02.743067  255490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:30:02.803008  255490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:30:02.793237887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:30:02.803134  255490 docker.go:319] overlay module found
	I1120 20:30:02.804857  255490 out.go:179] * Using the docker driver based on user configuration
	I1120 20:30:02.806259  255490 start.go:309] selected driver: docker
	I1120 20:30:02.806338  255490 start.go:930] validating driver "docker" against <nil>
	I1120 20:30:02.806383  255490 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:30:02.807612  255490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:30:02.873925  255490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:30:02.863635767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:30:02.874116  255490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:30:02.874369  255490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:30:02.875957  255490 out.go:179] * Using Docker driver with root privileges
	I1120 20:30:02.877277  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:02.877346  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:02.877357  255490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:30:02.877423  255490 start.go:353] cluster config:
	{Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1120 20:30:02.878682  255490 out.go:179] * Starting "addons-658933" primary control-plane node in "addons-658933" cluster
	I1120 20:30:02.879751  255490 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:30:02.880891  255490 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:30:02.881934  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:02.881968  255490 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:30:02.881982  255490 cache.go:65] Caching tarball of preloaded images
	I1120 20:30:02.882037  255490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:30:02.882096  255490 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:30:02.882111  255490 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:30:02.882518  255490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json ...
	I1120 20:30:02.882552  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json: {Name:mk4543ab9ea947efde347f2a2be95a3ca7691a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:02.899903  255490 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:30:02.900030  255490 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:30:02.900057  255490 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 20:30:02.900065  255490 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 20:30:02.900071  255490 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 20:30:02.900077  255490 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1120 20:30:16.112575  255490 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1120 20:30:16.112627  255490 cache.go:243] Successfully downloaded all kic artifacts
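
The cache phase above depends on exactly two artifacts, and both can be checked by hand when a run stalls here. A rough sketch, assuming the default MINIKUBE_HOME layout rather than the Jenkins-specific path in this log:

    # Preloaded image tarball for v1.34.1 on cri-o, as referenced above.
    ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"

    # Kic base image, pinned by digest in the cluster config above.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
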
	I1120 20:30:16.112697  255490 start.go:360] acquireMachinesLock for addons-658933: {Name:mkb5841ba9dc697afe54624d0d76909a3356842e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:30:16.112817  255490 start.go:364] duration metric: took 94.287µs to acquireMachinesLock for "addons-658933"
	I1120 20:30:16.112844  255490 start.go:93] Provisioning new machine with config: &{Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:30:16.112928  255490 start.go:125] createHost starting for "" (driver="docker")
	I1120 20:30:16.114639  255490 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1120 20:30:16.114876  255490 start.go:159] libmachine.API.Create for "addons-658933" (driver="docker")
	I1120 20:30:16.114904  255490 client.go:173] LocalClient.Create starting
	I1120 20:30:16.115032  255490 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 20:30:16.330553  255490 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 20:30:16.534465  255490 cli_runner.go:164] Run: docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 20:30:16.552562  255490 cli_runner.go:211] docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 20:30:16.552636  255490 network_create.go:284] running [docker network inspect addons-658933] to gather additional debugging logs...
	I1120 20:30:16.552656  255490 cli_runner.go:164] Run: docker network inspect addons-658933
	W1120 20:30:16.568418  255490 cli_runner.go:211] docker network inspect addons-658933 returned with exit code 1
	I1120 20:30:16.568472  255490 network_create.go:287] error running [docker network inspect addons-658933]: docker network inspect addons-658933: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-658933 not found
	I1120 20:30:16.568485  255490 network_create.go:289] output of [docker network inspect addons-658933]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-658933 not found
	
	** /stderr **
	I1120 20:30:16.568580  255490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:30:16.585933  255490 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0026d0b20}
	I1120 20:30:16.585999  255490 network_create.go:124] attempt to create docker network addons-658933 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1120 20:30:16.586054  255490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-658933 addons-658933
	I1120 20:30:16.634577  255490 network_create.go:108] docker network addons-658933 192.168.49.0/24 created
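
The network step is plain docker CLI, so a failed create can be replayed by hand with the same flags network_create.go logs above. A sketch reusing the names from this run:

    # Recreate the cluster bridge exactly as logged above.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-658933 \
      addons-658933

    # Confirm the subnet landed where network.go chose it.
    docker network inspect addons-658933 --format '{{(index .IPAM.Config 0).Subnet}}'
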
	I1120 20:30:16.634611  255490 kic.go:121] calculated static IP "192.168.49.2" for the "addons-658933" container
	I1120 20:30:16.634679  255490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 20:30:16.651593  255490 cli_runner.go:164] Run: docker volume create addons-658933 --label name.minikube.sigs.k8s.io=addons-658933 --label created_by.minikube.sigs.k8s.io=true
	I1120 20:30:16.669593  255490 oci.go:103] Successfully created a docker volume addons-658933
	I1120 20:30:16.669690  255490 cli_runner.go:164] Run: docker run --rm --name addons-658933-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --entrypoint /usr/bin/test -v addons-658933:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 20:30:20.129642  255490 cli_runner.go:217] Completed: docker run --rm --name addons-658933-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --entrypoint /usr/bin/test -v addons-658933:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (3.459902314s)
	I1120 20:30:20.129679  255490 oci.go:107] Successfully prepared a docker volume addons-658933
	I1120 20:30:20.129720  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:20.129740  255490 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 20:30:20.129810  255490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-658933:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 20:30:24.432158  255490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-658933:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.302289728s)
	I1120 20:30:24.432198  255490 kic.go:203] duration metric: took 4.302454483s to extract preloaded images to volume ...
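
The 4.3s extraction above is a throwaway container that untars the preload into the cluster volume, and it can be replayed in isolation when this step fails. A sketch with the tarball path shortened to a placeholder, using the same kicbase image the run above pins by digest:

    # PRELOAD stands in for the .tar.lz4 path logged above.
    PRELOAD=/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4

    # Mount the tarball read-only, mount the volume, extract with lz4.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v addons-658933:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924 \
      -I lz4 -xf /preloaded.tar -C /extractDir
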
	W1120 20:30:24.432344  255490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 20:30:24.432389  255490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 20:30:24.432441  255490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 20:30:24.487095  255490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-658933 --name addons-658933 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-658933 --network addons-658933 --ip 192.168.49.2 --volume addons-658933:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 20:30:24.801956  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Running}}
	I1120 20:30:24.820203  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:24.838694  255490 cli_runner.go:164] Run: docker exec addons-658933 stat /var/lib/dpkg/alternatives/iptables
	I1120 20:30:24.888132  255490 oci.go:144] the created container "addons-658933" has a running status.
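
Every ssh and scp step that follows resolves the container through the ephemeral host ports published by the docker run above (22, 2376, 5000, 8443, 32443). They can be read back the same way cli_runner does; a sketch that assumes every exposed port is published, as in this run:

    # Map each published container port to its host-side ephemeral port.
    docker container inspect addons-658933 --format \
      '{{range $p, $b := .NetworkSettings.Ports}}{{$p}} -> {{(index $b 0).HostPort}}{{"\n"}}{{end}}'
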
	I1120 20:30:24.888171  255490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa...
	I1120 20:30:25.070883  255490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 20:30:25.101013  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:25.131534  255490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 20:30:25.131593  255490 kic_runner.go:114] Args: [docker exec --privileged addons-658933 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 20:30:25.184394  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:25.204188  255490 machine.go:94] provisionDockerMachine start ...
	I1120 20:30:25.204326  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.225894  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.226203  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.226238  255490 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:30:25.362263  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-658933
	
	I1120 20:30:25.362302  255490 ubuntu.go:182] provisioning hostname "addons-658933"
	I1120 20:30:25.362374  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.380874  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.381186  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.381211  255490 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-658933 && echo "addons-658933" | sudo tee /etc/hostname
	I1120 20:30:25.523323  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-658933
	
	I1120 20:30:25.523395  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.541855  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.542077  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.542093  255490 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-658933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-658933/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-658933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:30:25.674264  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:30:25.674298  255490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:30:25.674316  255490 ubuntu.go:190] setting up certificates
	I1120 20:30:25.674325  255490 provision.go:84] configureAuth start
	I1120 20:30:25.674387  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:25.692070  255490 provision.go:143] copyHostCerts
	I1120 20:30:25.692159  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:30:25.692306  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:30:25.692378  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:30:25.692434  255490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.addons-658933 san=[127.0.0.1 192.168.49.2 addons-658933 localhost minikube]
	I1120 20:30:25.918654  255490 provision.go:177] copyRemoteCerts
	I1120 20:30:25.918727  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:30:25.918764  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.936658  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.031611  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:30:26.051080  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:30:26.068014  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:30:26.085192  255490 provision.go:87] duration metric: took 410.851336ms to configureAuth
	I1120 20:30:26.085234  255490 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:30:26.085415  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:26.085525  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.103290  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:26.103498  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:26.103516  255490 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:30:26.379766  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:30:26.379794  255490 machine.go:97] duration metric: took 1.175562439s to provisionDockerMachine
	I1120 20:30:26.379804  255490 client.go:176] duration metric: took 10.264891227s to LocalClient.Create
	I1120 20:30:26.379823  255490 start.go:167] duration metric: took 10.264948521s to libmachine.API.Create "addons-658933"
	I1120 20:30:26.379848  255490 start.go:293] postStartSetup for "addons-658933" (driver="docker")
	I1120 20:30:26.379857  255490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:30:26.379911  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:30:26.379951  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.398162  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.494237  255490 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:30:26.497838  255490 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:30:26.497867  255490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:30:26.497880  255490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:30:26.497937  255490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:30:26.497971  255490 start.go:296] duration metric: took 118.116765ms for postStartSetup
	I1120 20:30:26.498281  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:26.515591  255490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json ...
	I1120 20:30:26.515956  255490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:30:26.516010  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.533439  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.625724  255490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:30:26.630380  255490 start.go:128] duration metric: took 10.517434615s to createHost
	I1120 20:30:26.630412  255490 start.go:83] releasing machines lock for "addons-658933", held for 10.517579755s
	I1120 20:30:26.630484  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:26.648987  255490 ssh_runner.go:195] Run: cat /version.json
	I1120 20:30:26.649049  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.649058  255490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:30:26.649121  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.667742  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.667789  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.817535  255490 ssh_runner.go:195] Run: systemctl --version
	I1120 20:30:26.823827  255490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:30:26.858465  255490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:30:26.863084  255490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:30:26.863146  255490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:30:26.888392  255490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 20:30:26.888415  255490 start.go:496] detecting cgroup driver to use...
	I1120 20:30:26.888448  255490 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:30:26.888496  255490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:30:26.903744  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:30:26.915977  255490 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:30:26.916037  255490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:30:26.932151  255490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:30:26.949454  255490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:30:27.029821  255490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:30:27.117850  255490 docker.go:234] disabling docker service ...
	I1120 20:30:27.117908  255490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:30:27.136083  255490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:30:27.148242  255490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:30:27.228284  255490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:30:27.308696  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:30:27.321107  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:30:27.335041  255490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:30:27.335120  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.345736  255490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:30:27.345814  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.354928  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.363478  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.372559  255490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:30:27.380575  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.389050  255490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.402164  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.410501  255490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:30:27.417624  255490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:30:27.424692  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:27.505005  255490 ssh_runner.go:195] Run: sudo systemctl restart crio
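
The run of sed edits above leaves four settings in the cri-o drop-in (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl), and the restart only sticks if cri-o accepts them. A quick readback sketch against the node container:

    # Show the values the sed commands above rewrote.
    docker exec addons-658933 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf

    # cri-o must report active again after the restart.
    docker exec addons-658933 systemctl is-active crio
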
	I1120 20:30:27.637887  255490 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:30:27.637966  255490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:30:27.642010  255490 start.go:564] Will wait 60s for crictl version
	I1120 20:30:27.642066  255490 ssh_runner.go:195] Run: which crictl
	I1120 20:30:27.645848  255490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:30:27.669633  255490 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:30:27.669735  255490 ssh_runner.go:195] Run: crio --version
	I1120 20:30:27.697787  255490 ssh_runner.go:195] Run: crio --version
	I1120 20:30:27.725654  255490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:30:27.726985  255490 cli_runner.go:164] Run: docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:30:27.744591  255490 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:30:27.748624  255490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:30:27.758674  255490 kubeadm.go:884] updating cluster {Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:30:27.758801  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:27.758861  255490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:30:27.790387  255490 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:30:27.790410  255490 crio.go:433] Images already preloaded, skipping extraction
	I1120 20:30:27.790456  255490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:30:27.815605  255490 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:30:27.815630  255490 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:30:27.815638  255490 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 20:30:27.815730  255490 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-658933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:30:27.815794  255490 ssh_runner.go:195] Run: crio config
	I1120 20:30:27.859258  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:27.859277  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:27.859299  255490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:30:27.859335  255490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-658933 NodeName:addons-658933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:30:27.859496  255490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-658933"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
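
This generated manifest is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and recent kubeadm releases can sanity-check it before init consumes it. A sketch, hedged on "kubeadm config validate" being available in the bundled v1.34.1 binary:

    # Validate the kubeadm manifest with the binary minikube installed.
    docker exec addons-658933 \
      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
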
	
	I1120 20:30:27.859572  255490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:30:27.867802  255490 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:30:27.867888  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:30:27.875613  255490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:30:27.887750  255490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:30:27.902065  255490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1120 20:30:27.914385  255490 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:30:27.918098  255490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:30:27.927525  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:28.008342  255490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:30:28.036018  255490 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933 for IP: 192.168.49.2
	I1120 20:30:28.036041  255490 certs.go:195] generating shared ca certs ...
	I1120 20:30:28.036057  255490 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.036178  255490 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:30:28.466205  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt ...
	I1120 20:30:28.466244  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt: {Name:mk6f97ec9583eb89bfd69ef395c34ff3ea55f3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.466473  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key ...
	I1120 20:30:28.466491  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key: {Name:mk3e7e29a295b7f6ffe6a8667dd55d70340288c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.466634  255490 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:30:29.048529  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt ...
	I1120 20:30:29.048572  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt: {Name:mk52c606e9c73320afcb1e218858dd869c111ce4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.048820  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key ...
	I1120 20:30:29.048841  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key: {Name:mk9df92e14340f94ea29b58a77daf340bce4f983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.048963  255490 certs.go:257] generating profile certs ...
	I1120 20:30:29.049050  255490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key
	I1120 20:30:29.049069  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt with IP's: []
	I1120 20:30:29.301284  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt ...
	I1120 20:30:29.301317  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: {Name:mkd85bb771caeeaf317adc1d90008b021a4c8bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.301534  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key ...
	I1120 20:30:29.301552  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key: {Name:mk9eaffcdf9a4d8d133011c84aa665656203b92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.301667  255490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8
	I1120 20:30:29.301697  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1120 20:30:29.765881  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 ...
	I1120 20:30:29.765924  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8: {Name:mkdf619e1dbbaac171eec8a1e6b70761a2885c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.766158  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8 ...
	I1120 20:30:29.766180  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8: {Name:mk5e4e602db673c2658bfd554a33054d1ef58bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.766321  255490 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt
	I1120 20:30:29.766443  255490 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key
	I1120 20:30:29.766534  255490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key
	I1120 20:30:29.766570  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt with IP's: []
	I1120 20:30:29.934993  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt ...
	I1120 20:30:29.935036  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt: {Name:mk5f4d6904f51630b72384744580871b1ec140f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.935267  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key ...
	I1120 20:30:29.935286  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key: {Name:mkfe77cc656afbd3ae5eab9d2a938dae5a390e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
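The certs.go/crypto.go sequence above is the standard two-tier pattern: a self-signed root (minikubeCA, proxyClientCA) is generated once, then per-profile leaf certs (client, apiserver, aggregator proxy-client) are signed by it. A compact sketch of that pattern with Go's crypto/x509 (an illustration under assumed parameters, not minikube's actual implementation; errors ignored for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Root CA: self-signed, marked IsCA so it can sign leaf certificates.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf: an apiserver-style serving cert signed by the CA, bound to the
	// same IP set the log shows (service VIP, loopback, node IP).
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = leafDER // would be PEM-encoded and written out as apiserver.crt
}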
	I1120 20:30:29.935493  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:30:29.935534  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:30:29.935573  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:30:29.935604  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:30:29.936350  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:30:29.955079  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:30:29.972691  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:30:29.990062  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:30:30.007727  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:30:30.024466  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:30:30.040969  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:30:30.057938  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 20:30:30.074469  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:30:30.092814  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:30:30.104853  255490 ssh_runner.go:195] Run: openssl version
	I1120 20:30:30.110923  255490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.117948  255490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:30:30.127491  255490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.131199  255490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.131268  255490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.165031  255490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:30:30.172953  255490 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
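The two ln -fs steps above install the CA into the system trust store: OpenSSL-style trust directories are scanned by subject-hash filename, which is why the PEM is hashed (b5213941 here) and a <hash>.0 symlink is pointed at it. A hypothetical Go sketch of that step (assumes openssl on PATH and permission to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout prints the subject-hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Equivalent of ln -fs: drop any stale link, then re-create it.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}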
	I1120 20:30:30.180264  255490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:30:30.183762  255490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:30:30.183816  255490 kubeadm.go:401] StartCluster: {Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:30:30.183910  255490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:30:30.183958  255490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:30:30.211077  255490 cri.go:89] found id: ""
	I1120 20:30:30.211141  255490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:30:30.219578  255490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:30:30.227183  255490 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 20:30:30.227256  255490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:30:30.234883  255490 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:30:30.234900  255490 kubeadm.go:158] found existing configuration files:
	
	I1120 20:30:30.234941  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:30:30.242205  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:30:30.242279  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:30:30.249521  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:30:30.256865  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:30:30.256931  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:30:30.263941  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:30:30.271617  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:30:30.271690  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:30:30.280239  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:30:30.288812  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:30:30.288887  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:30:30.297096  255490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 20:30:30.359287  255490 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:30:30.418534  255490 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:30:39.395515  255490 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:30:39.395629  255490 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:30:39.395738  255490 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 20:30:39.395841  255490 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 20:30:39.395876  255490 kubeadm.go:319] OS: Linux
	I1120 20:30:39.395926  255490 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 20:30:39.395979  255490 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 20:30:39.396027  255490 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 20:30:39.396070  255490 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 20:30:39.396118  255490 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 20:30:39.396195  255490 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 20:30:39.396324  255490 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 20:30:39.396366  255490 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 20:30:39.396463  255490 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:30:39.396640  255490 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:30:39.396765  255490 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:30:39.396857  255490 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:30:39.398308  255490 out.go:252]   - Generating certificates and keys ...
	I1120 20:30:39.398397  255490 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:30:39.398507  255490 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:30:39.398606  255490 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:30:39.398692  255490 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:30:39.398771  255490 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:30:39.398841  255490 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:30:39.398917  255490 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:30:39.399051  255490 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-658933 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 20:30:39.399094  255490 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:30:39.399205  255490 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-658933 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 20:30:39.399291  255490 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:30:39.399359  255490 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:30:39.399398  255490 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:30:39.399450  255490 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:30:39.399494  255490 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:30:39.399543  255490 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:30:39.399596  255490 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:30:39.399654  255490 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:30:39.399701  255490 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:30:39.399770  255490 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:30:39.399832  255490 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:30:39.400997  255490 out.go:252]   - Booting up control plane ...
	I1120 20:30:39.401080  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:30:39.401150  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:30:39.401240  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:30:39.401382  255490 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:30:39.401464  255490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:30:39.401558  255490 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:30:39.401650  255490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:30:39.401720  255490 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:30:39.401886  255490 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:30:39.402022  255490 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:30:39.402093  255490 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.91805ms
	I1120 20:30:39.402235  255490 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:30:39.402312  255490 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1120 20:30:39.402399  255490 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:30:39.402468  255490 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:30:39.402531  255490 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.521604309s
	I1120 20:30:39.402589  255490 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.317842825s
	I1120 20:30:39.402653  255490 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001491631s
	I1120 20:30:39.402743  255490 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:30:39.402899  255490 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:30:39.402998  255490 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:30:39.403204  255490 kubeadm.go:319] [mark-control-plane] Marking the node addons-658933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:30:39.403305  255490 kubeadm.go:319] [bootstrap-token] Using token: 3wjd0t.465tl4dd1yvzdt5n
	I1120 20:30:39.405321  255490 out.go:252]   - Configuring RBAC rules ...
	I1120 20:30:39.405460  255490 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:30:39.405565  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:30:39.405724  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:30:39.405836  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:30:39.405968  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:30:39.406063  255490 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:30:39.406160  255490 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:30:39.406241  255490 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:30:39.406298  255490 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:30:39.406307  255490 kubeadm.go:319] 
	I1120 20:30:39.406386  255490 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:30:39.406395  255490 kubeadm.go:319] 
	I1120 20:30:39.406527  255490 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:30:39.406547  255490 kubeadm.go:319] 
	I1120 20:30:39.406582  255490 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:30:39.406664  255490 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:30:39.406724  255490 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:30:39.406732  255490 kubeadm.go:319] 
	I1120 20:30:39.406792  255490 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:30:39.406800  255490 kubeadm.go:319] 
	I1120 20:30:39.406842  255490 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:30:39.406851  255490 kubeadm.go:319] 
	I1120 20:30:39.406905  255490 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:30:39.406995  255490 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:30:39.407055  255490 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:30:39.407058  255490 kubeadm.go:319] 
	I1120 20:30:39.407127  255490 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:30:39.407211  255490 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:30:39.407231  255490 kubeadm.go:319] 
	I1120 20:30:39.407300  255490 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3wjd0t.465tl4dd1yvzdt5n \
	I1120 20:30:39.407394  255490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 20:30:39.407422  255490 kubeadm.go:319] 	--control-plane 
	I1120 20:30:39.407426  255490 kubeadm.go:319] 
	I1120 20:30:39.407548  255490 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:30:39.407558  255490 kubeadm.go:319] 
	I1120 20:30:39.407653  255490 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3wjd0t.465tl4dd1yvzdt5n \
	I1120 20:30:39.407800  255490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
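For reference, the --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 pin over the CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of how such a pin can be recomputed from ca.crt (illustrative; kubeadm derives it internally):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw SPKI bytes, matching the sha256:... token pin format.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}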
	I1120 20:30:39.407814  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:39.407822  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:39.409798  255490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:30:39.410818  255490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:30:39.415289  255490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 20:30:39.415307  255490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:30:39.428329  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:30:39.631760  255490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:30:39.631846  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:39.631940  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-658933 minikube.k8s.io/updated_at=2025_11_20T20_30_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-658933 minikube.k8s.io/primary=true
	I1120 20:30:39.716969  255490 ops.go:34] apiserver oom_adj: -16
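The oom_adj probe recorded here reads the kube-apiserver's OOM adjustment from procfs; -16 biases the kernel away from OOM-killing the apiserver under memory pressure. A small illustrative sketch of the same check (assumes pgrep finds exactly one apiserver process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
}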
	I1120 20:30:39.717090  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:40.218175  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:40.717186  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:41.218026  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:41.717832  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:42.218081  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:42.717840  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:43.217376  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:43.718191  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:44.217589  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:44.287544  255490 kubeadm.go:1114] duration metric: took 4.655753879s to wait for elevateKubeSystemPrivileges
	I1120 20:30:44.287587  255490 kubeadm.go:403] duration metric: took 14.103777939s to StartCluster
	I1120 20:30:44.287615  255490 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:44.287768  255490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:30:44.288172  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:44.288399  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:30:44.288427  255490 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:30:44.288497  255490 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:30:44.288644  255490 addons.go:70] Setting yakd=true in profile "addons-658933"
	I1120 20:30:44.288669  255490 addons.go:239] Setting addon yakd=true in "addons-658933"
	I1120 20:30:44.288697  255490 addons.go:70] Setting registry-creds=true in profile "addons-658933"
	I1120 20:30:44.288696  255490 addons.go:70] Setting inspektor-gadget=true in profile "addons-658933"
	I1120 20:30:44.288723  255490 addons.go:239] Setting addon registry-creds=true in "addons-658933"
	I1120 20:30:44.288725  255490 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-658933"
	I1120 20:30:44.288725  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:44.288720  255490 addons.go:70] Setting default-storageclass=true in profile "addons-658933"
	I1120 20:30:44.288739  255490 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-658933"
	I1120 20:30:44.288745  255490 addons.go:70] Setting metrics-server=true in profile "addons-658933"
	I1120 20:30:44.288753  255490 addons.go:70] Setting ingress=true in profile "addons-658933"
	I1120 20:30:44.288754  255490 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-658933"
	I1120 20:30:44.288706  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288763  255490 addons.go:239] Setting addon metrics-server=true in "addons-658933"
	I1120 20:30:44.288771  255490 addons.go:70] Setting ingress-dns=true in profile "addons-658933"
	I1120 20:30:44.288783  255490 addons.go:239] Setting addon ingress-dns=true in "addons-658933"
	I1120 20:30:44.288796  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288813  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.289160  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289170  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289349  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289352  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289462  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288745  255490 addons.go:70] Setting gcp-auth=true in profile "addons-658933"
	I1120 20:30:44.289623  255490 mustload.go:66] Loading cluster: addons-658933
	I1120 20:30:44.289778  255490 addons.go:70] Setting volcano=true in profile "addons-658933"
	I1120 20:30:44.289796  255490 addons.go:239] Setting addon volcano=true in "addons-658933"
	I1120 20:30:44.289815  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:44.289828  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.289849  255490 addons.go:70] Setting cloud-spanner=true in profile "addons-658933"
	I1120 20:30:44.289871  255490 addons.go:239] Setting addon cloud-spanner=true in "addons-658933"
	I1120 20:30:44.289913  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.290072  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.290288  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.290429  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288764  255490 addons.go:239] Setting addon ingress=true in "addons-658933"
	I1120 20:30:44.293080  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288716  255490 addons.go:70] Setting storage-provisioner=true in profile "addons-658933"
	I1120 20:30:44.293429  255490 addons.go:239] Setting addon storage-provisioner=true in "addons-658933"
	I1120 20:30:44.293472  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.293643  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.294012  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288757  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.290446  255490 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-658933"
	I1120 20:30:44.290464  255490 addons.go:70] Setting volumesnapshots=true in profile "addons-658933"
	I1120 20:30:44.290480  255490 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-658933"
	I1120 20:30:44.290492  255490 addons.go:70] Setting registry=true in profile "addons-658933"
	I1120 20:30:44.288728  255490 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-658933"
	I1120 20:30:44.288733  255490 addons.go:239] Setting addon inspektor-gadget=true in "addons-658933"
	I1120 20:30:44.291508  255490 out.go:179] * Verifying Kubernetes components...
	I1120 20:30:44.294358  255490 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-658933"
	I1120 20:30:44.295346  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.294986  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.296198  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.295055  255490 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-658933"
	I1120 20:30:44.301281  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.301333  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:44.295067  255490 addons.go:239] Setting addon volumesnapshots=true in "addons-658933"
	I1120 20:30:44.301581  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.295079  255490 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-658933"
	I1120 20:30:44.301684  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.301761  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.302133  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.302169  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.295109  255490 addons.go:239] Setting addon registry=true in "addons-658933"
	I1120 20:30:44.302256  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.295145  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.305972  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.306928  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.340707  255490 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:30:44.342975  255490 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:30:44.343000  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1120 20:30:44.343094  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.356746  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:30:44.359069  255490 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:30:44.360315  255490 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:30:44.360315  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:44.361396  255490 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:30:44.361419  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:30:44.361477  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.363554  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:44.363898  255490 addons.go:239] Setting addon default-storageclass=true in "addons-658933"
	I1120 20:30:44.363946  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.365235  255490 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:30:44.365256  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:30:44.365314  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.369630  255490 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:30:44.370336  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.372298  255490 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:30:44.372322  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:30:44.372380  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.372856  255490 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:30:44.372872  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:30:44.372920  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.373595  255490 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:30:44.383200  255490 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-658933"
	I1120 20:30:44.389843  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.383802  255490 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:30:44.390391  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.384799  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:30:44.390434  255490 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:30:44.390504  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.394277  255490 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:30:44.397389  255490 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:30:44.397755  255490 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:30:44.398687  255490 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:30:44.398712  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:30:44.398779  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.398975  255490 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:30:44.399013  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:30:44.399093  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.399320  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:30:44.399332  255490 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:30:44.399377  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.399580  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:30:44.400624  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:30:44.400651  255490 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	W1120 20:30:44.400662  255490 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:30:44.400715  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.404800  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.420607  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:30:44.422936  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:30:44.424248  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:30:44.425355  255490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:30:44.426548  255490 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:30:44.426569  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:30:44.426635  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.426845  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:30:44.430771  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:30:44.430933  255490 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:30:44.430998  255490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:30:44.431246  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.433803  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:30:44.434043  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.437710  255490 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:30:44.439018  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:30:44.439142  255490 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:30:44.439156  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:30:44.439224  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.443476  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.443544  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.444710  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:30:44.445404  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.445815  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:30:44.445834  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:30:44.445901  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.466449  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.466798  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.466848  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.470817  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.473356  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.476614  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.481996  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:30:44.482475  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.485242  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.485283  255490 retry.go:31] will retry after 155.42709ms: ssh: handshake failed: EOF
	I1120 20:30:44.493808  255490 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:30:44.494404  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.496714  255490 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:30:44.498797  255490 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:30:44.498881  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:30:44.498977  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.504727  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.506552  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.506894  255490 retry.go:31] will retry after 201.636658ms: ssh: handshake failed: EOF
	I1120 20:30:44.516460  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.517611  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.517692  255490 retry.go:31] will retry after 374.650461ms: ssh: handshake failed: EOF
	I1120 20:30:44.518569  255490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:30:44.538611  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.540447  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.540483  255490 retry.go:31] will retry after 337.040085ms: ssh: handshake failed: EOF
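The sshutil warnings above show the retry pattern minikube leans on during parallel addon setup: a transient "ssh: handshake failed: EOF" is logged and the dial is retried after a short randomized delay rather than aborting the run. A generic Go sketch of that pattern (hypothetical helper, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withRetry re-invokes fn after a randomized 100-400ms delay until it
// succeeds or the attempt budget is spent, returning the last error.
func withRetry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := withRetry(5, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // transient, then clears
		}
		return nil
	})
	fmt.Println("result:", err)
}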
	I1120 20:30:44.608085  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:30:44.625959  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:30:44.636133  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:30:44.648083  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:30:44.648111  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:30:44.664850  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:30:44.664884  255490 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:30:44.665052  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:30:44.665734  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:30:44.669274  255490 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:30:44.669297  255490 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:30:44.681348  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:30:44.681381  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:30:44.687355  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:30:44.688633  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:30:44.688657  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:30:44.693958  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:30:44.693979  255490 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:30:44.703806  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:30:44.703904  255490 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:30:44.705656  255490 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:30:44.705675  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:30:44.717708  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:30:44.717807  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:30:44.728532  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:30:44.728627  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:30:44.739499  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:30:44.740657  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:30:44.740678  255490 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:30:44.749150  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:30:44.749183  255490 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:30:44.760005  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:30:44.760043  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:30:44.765594  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:30:44.765621  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:30:44.779557  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:30:44.779594  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:30:44.787155  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:30:44.793924  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:30:44.794021  255490 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:30:44.803069  255490 node_ready.go:35] waiting up to 6m0s for node "addons-658933" to be "Ready" ...
	I1120 20:30:44.803646  255490 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
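	The record injected here lands in the coredns ConfigMap in kube-system, so it can be verified straight from the cluster. A minimal check, assuming kubectl is pointed at the addons-658933 context:
	
		kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A1 host.minikube.internal
	
	The expected match is the 192.168.49.1 host entry from the log line above.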
	I1120 20:30:44.806578  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:30:44.806600  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:30:44.818564  255490 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:30:44.818590  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:30:44.840236  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:30:44.875691  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:30:44.875719  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:30:44.886668  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:30:44.901027  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:30:44.921529  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:30:44.945247  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:30:44.945346  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:30:45.038393  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:30:45.038486  255490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:30:45.110033  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:30:45.110059  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:30:45.112358  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:30:45.118049  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:30:45.149133  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:30:45.149168  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:30:45.194796  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:30:45.194850  255490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:30:45.252975  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:30:45.313308  255490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-658933" context rescaled to 1 replicas
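	The rescale logged above is an ordinary deployment scale; the equivalent manual command (illustrative only, the harness drives this through the API) is:
	
		kubectl -n kube-system scale deployment coredns --replicas=1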
	I1120 20:30:45.956010  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.319832842s)
	I1120 20:30:45.956123  255490 addons.go:480] Verifying addon ingress=true in "addons-658933"
	I1120 20:30:45.956235  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.290454836s)
	I1120 20:30:45.956366  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.216841507s)
	I1120 20:30:45.956382  255490 addons.go:480] Verifying addon registry=true in "addons-658933"
	I1120 20:30:45.956137  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.291050342s)
	I1120 20:30:45.956331  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.268950251s)
	I1120 20:30:45.956530  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169270174s)
	I1120 20:30:45.956545  255490 addons.go:480] Verifying addon metrics-server=true in "addons-658933"
	I1120 20:30:45.956604  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.115169622s)
	I1120 20:30:45.957854  255490 out.go:179] * Verifying ingress addon...
	I1120 20:30:45.957888  255490 out.go:179] * Verifying registry addon...
	I1120 20:30:45.960418  255490 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-658933 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:30:45.962275  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:30:45.962275  255490 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:30:45.965232  255490 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:30:45.965315  255490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:30:45.965329  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
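	The kapi.go poll loops that follow map one-to-one onto label-selector queries; checking the same pods by hand, assuming the same cluster context, would be:
	
		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	
	Both selectors and namespaces are taken verbatim from the wait messages above.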
	I1120 20:30:46.371380  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.484574379s)
	W1120 20:30:46.371426  255490 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:30:46.371453  255490 retry.go:31] will retry after 250.434733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
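	This is the standard CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRDs that introduce that kind, and the first attempt reaches the API server before the new kind is registered, hence "ensure CRDs are installed first". minikube handles it by retrying; a common manual workaround (a sketch, not what the harness does) is to apply the CRDs on their own and wait for them to be established before applying the custom resources:
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	
	By the time of the retry below, the CRDs created on the first attempt are established, which is why the second apply succeeds.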
	I1120 20:30:46.371449  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.470280942s)
	I1120 20:30:46.371497  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.449929791s)
	I1120 20:30:46.371546  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.259163755s)
	I1120 20:30:46.371589  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.253515744s)
	I1120 20:30:46.371836  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.118810669s)
	I1120 20:30:46.371861  255490 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-658933"
	I1120 20:30:46.373188  255490 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:30:46.375253  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:30:46.378188  255490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:30:46.378209  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 20:30:46.380074  255490 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
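	The warning is Kubernetes' optimistic concurrency check: two writers raced on the csi-hostpath-sc storageclass, and the update carrying the stale resourceVersion was rejected. The operation being attempted, marking a class non-default, is a single annotation patch that is safe to re-run on conflict (shown as an illustration):
	
		kubectl patch storageclass csi-hostpath-sc -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'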
	I1120 20:30:46.479294  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:46.479401  255490 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:30:46.479415  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:46.622528  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1120 20:30:46.806760  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:46.878871  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:46.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:46.965939  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:47.378658  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:47.479446  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:47.479516  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:47.878841  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:47.965550  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:47.965638  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:48.378538  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:48.478921  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:48.479116  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:48.879357  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:48.965910  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:48.966139  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.113049  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.490474653s)
	W1120 20:30:49.306531  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:49.379072  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:49.479567  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:49.479719  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.878714  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:49.965738  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.378645  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:50.479558  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.479780  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:50.877983  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:50.965503  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.965728  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1120 20:30:51.306808  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:51.379107  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:51.480176  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:51.480376  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:51.878787  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:51.965534  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:51.965747  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:52.012397  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:30:52.012456  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:52.031008  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:52.139277  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:30:52.152717  255490 addons.go:239] Setting addon gcp-auth=true in "addons-658933"
	I1120 20:30:52.152768  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:52.153133  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:52.170737  255490 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:30:52.170801  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:52.188332  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
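	The HostPort resolved by the docker inspect above is the host-side mapping of the node container's 22/tcp, and the key path comes from the log itself; an equivalent manual session (docker is minikube's standard node user, assumed here) would be:
	
		ssh -p 32768 -i /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa docker@127.0.0.1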
	I1120 20:30:52.280947  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:52.282264  255490 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:30:52.283387  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:30:52.283411  255490 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:30:52.297064  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:30:52.297098  255490 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:30:52.310540  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:30:52.310560  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:30:52.322756  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:30:52.379105  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:52.465771  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:52.465961  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:52.633423  255490 addons.go:480] Verifying addon gcp-auth=true in "addons-658933"
	I1120 20:30:52.635628  255490 out.go:179] * Verifying gcp-auth addon...
	I1120 20:30:52.637645  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:30:52.640152  255490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:30:52.640173  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:52.879120  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:52.965912  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:52.966120  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:53.140649  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:53.378517  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:53.465361  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:53.465532  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:53.640958  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1120 20:30:53.807014  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:53.878808  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:53.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:53.965765  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:54.140536  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:54.378571  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:54.465306  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:54.465508  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:54.641426  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:54.878834  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:54.965886  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:54.965948  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.140886  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:55.378515  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:55.465729  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:55.465788  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.643587  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:55.805975  255490 node_ready.go:49] node "addons-658933" is "Ready"
	I1120 20:30:55.806014  255490 node_ready.go:38] duration metric: took 11.00289094s for node "addons-658933" to be "Ready" ...
	I1120 20:30:55.806034  255490 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:30:55.806097  255490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:30:55.822010  255490 api_server.go:72] duration metric: took 11.533542492s to wait for apiserver process to appear ...
	I1120 20:30:55.822037  255490 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:30:55.822067  255490 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:30:55.826210  255490 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:30:55.827161  255490 api_server.go:141] control plane version: v1.34.1
	I1120 20:30:55.827186  255490 api_server.go:131] duration metric: took 5.142237ms to wait for apiserver health ...
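	The healthz probe can be reproduced outside the harness; the apiserver's certificate is signed by minikube's own CA, so the request needs either that CA bundle or verification disabled (a sketch, with the CA path assumed to be the profile default):
	
		curl --cacert ~/.minikube/ca.crt https://192.168.49.2:8443/healthz
	
	A healthy control plane answers 200 with the literal body "ok", matching the two log lines above.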
	I1120 20:30:55.827197  255490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:30:55.831306  255490 system_pods.go:59] 20 kube-system pods found
	I1120 20:30:55.831341  255490 system_pods.go:61] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending
	I1120 20:30:55.831354  255490 system_pods.go:61] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:55.831363  255490 system_pods.go:61] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:55.831373  255490 system_pods.go:61] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:55.831382  255490 system_pods.go:61] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:55.831394  255490 system_pods.go:61] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:55.831400  255490 system_pods.go:61] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:55.831405  255490 system_pods.go:61] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:55.831409  255490 system_pods.go:61] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:55.831417  255490 system_pods.go:61] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:55.831422  255490 system_pods.go:61] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:55.831427  255490 system_pods.go:61] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:55.831436  255490 system_pods.go:61] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:55.831444  255490 system_pods.go:61] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:55.831451  255490 system_pods.go:61] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:55.831459  255490 system_pods.go:61] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:55.831465  255490 system_pods.go:61] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending
	I1120 20:30:55.831472  255490 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:55.831481  255490 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending
	I1120 20:30:55.831489  255490 system_pods.go:61] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:55.831498  255490 system_pods.go:74] duration metric: took 4.293354ms to wait for pod list to return data ...
	I1120 20:30:55.831510  255490 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:30:55.833792  255490 default_sa.go:45] found service account: "default"
	I1120 20:30:55.833810  255490 default_sa.go:55] duration metric: took 2.294915ms for default service account to be created ...
	I1120 20:30:55.833818  255490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:30:55.837093  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:55.837126  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending
	I1120 20:30:55.837141  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:55.837151  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:55.837163  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:55.837173  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:55.837186  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:55.837195  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:55.837202  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:55.837209  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:55.837231  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:55.837241  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:55.837255  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:55.837265  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:55.837274  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:55.837297  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:55.837309  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:55.837322  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending
	I1120 20:30:55.837331  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:55.837341  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending
	I1120 20:30:55.837350  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:55.837371  255490 retry.go:31] will retry after 233.691892ms: missing components: kube-dns
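	The "kube-dns" component the poller reports as missing is CoreDNS, which still carries the legacy k8s-app=kube-dns label; the same readiness check by hand is:
	
		kubectl -n kube-system get pods -l k8s-app=kube-dns
	
	The retry resolves at 20:30:57 below, once coredns-66bc5c9577-zbjpk transitions to Running.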
	I1120 20:30:55.878700  255490 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:30:55.878723  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:55.979841  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.980337  255490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:30:55.980356  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.082886  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.082932  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.082942  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.082953  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.082960  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.082970  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.082977  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.082984  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.082991  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.082996  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.083005  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.083011  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.083017  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.083030  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.083041  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.083054  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.083062  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.083069  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.083081  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.083091  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.083099  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.083123  255490 retry.go:31] will retry after 284.764079ms: missing components: kube-dns
	I1120 20:30:56.179618  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:56.372482  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.372515  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.372524  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.372530  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.372536  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.372542  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.372546  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.372550  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.372554  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.372557  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.372562  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.372566  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.372569  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.372575  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.372582  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.372587  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.372593  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.372599  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.372606  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.372612  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.372620  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.372635  255490 retry.go:31] will retry after 300.095602ms: missing components: kube-dns
	I1120 20:30:56.377851  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:56.465605  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.465629  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:56.642003  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:56.677833  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.677877  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.677890  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.677902  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.677910  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.677919  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.677924  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.677932  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.677941  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.677947  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.677958  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.677969  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.677982  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.677991  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.678004  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.678012  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.678022  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.678029  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.678040  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.678050  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.678060  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.678082  255490 retry.go:31] will retry after 380.404175ms: missing components: kube-dns
	I1120 20:30:56.880262  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:56.981309  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.981431  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:57.064071  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:57.064112  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:57.064120  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Running
	I1120 20:30:57.064132  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:57.064140  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:57.064171  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:57.064182  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:57.064189  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:57.064198  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:57.064204  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:57.064225  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:57.064234  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:57.064241  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:57.064250  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:57.064260  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:57.064271  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:57.064280  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:57.064292  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:57.064301  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:57.064313  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:57.064319  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Running
	I1120 20:30:57.064334  255490 system_pods.go:126] duration metric: took 1.230509144s to wait for k8s-apps to be running ...
	I1120 20:30:57.064347  255490 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:30:57.064407  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:30:57.080958  255490 system_svc.go:56] duration metric: took 16.60059ms WaitForService to wait for kubelet
	I1120 20:30:57.080993  255490 kubeadm.go:587] duration metric: took 12.79253115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:30:57.081019  255490 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:30:57.084381  255490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:30:57.084414  255490 node_conditions.go:123] node cpu capacity is 8
	I1120 20:30:57.084433  255490 node_conditions.go:105] duration metric: took 3.407997ms to run NodePressure ...
	I1120 20:30:57.084450  255490 start.go:242] waiting for startup goroutines ...
	I1120 20:30:57.141322  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:57.378668  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:57.465749  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:57.465796  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:57.641452  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:57.878664  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:57.965359  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:57.965407  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:58.141706  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:58.379207  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:58.467832  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:58.467859  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:58.640995  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:58.879536  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:58.966611  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:58.966985  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:59.141696  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:59.379345  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:59.466875  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:59.466957  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:59.643733  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:59.879237  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:59.966151  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:59.966369  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:00.141266  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:00.378693  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:00.465580  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:00.465615  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:00.641643  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:00.878740  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:00.965814  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:00.965899  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:01.142276  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:01.379527  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:01.467961  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:01.468294  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:01.641502  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:01.879106  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:01.967199  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:01.967396  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:02.141394  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:02.379607  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:02.465725  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:02.465784  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:02.641757  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:02.878862  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:02.965741  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:02.965815  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:03.140943  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:03.379765  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:03.465982  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:03.466022  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:03.641315  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:03.878974  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:03.966371  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:03.966442  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:04.142603  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:04.380193  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:04.466610  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:04.466692  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:04.641125  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.012082  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.012104  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:05.012239  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.140693  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.379258  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.466043  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.466135  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:05.640982  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.879539  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.965482  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.965547  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:06.141057  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:06.379432  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:06.466122  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:06.466259  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:06.641283  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:06.880151  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:06.965996  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:06.966027  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:07.141428  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:07.378588  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:07.465446  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:07.465490  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:07.640922  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:07.880403  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:07.966163  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:07.966318  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:08.141588  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:08.379578  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:08.465745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:08.466030  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:08.641822  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:08.879513  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:08.965627  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:08.965694  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:09.141852  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:09.379966  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:09.481265  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:09.481365  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:09.641578  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:09.879130  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:09.966006  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:09.966015  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:10.141002  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:10.379779  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:10.466077  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:10.466123  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:10.641159  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:10.981688  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:10.981745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:10.981991  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:11.142443  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:11.379915  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:11.466110  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:11.466693  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:11.641774  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:11.879258  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:11.966374  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:11.966402  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:12.141454  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:12.379441  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:12.466661  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:12.466680  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:12.640818  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:12.879836  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:12.965465  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:12.965568  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.141956  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:13.379443  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:13.465856  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:13.465856  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.640998  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:13.879576  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:13.965419  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.965417  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.141470  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:14.379742  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:14.465858  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.465919  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:14.641058  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:14.953461  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:14.976705  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.976924  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:15.141842  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:15.379282  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:15.466823  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:15.466802  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:15.640824  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:15.879438  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:15.967509  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:15.967576  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:16.141701  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:16.379778  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:16.465878  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:16.466005  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:16.641690  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:16.879420  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:16.966078  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:16.966335  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:17.141049  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:17.380060  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:17.466196  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:17.466238  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:17.641250  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:17.878716  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:17.966197  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:17.966365  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:18.141346  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:18.380185  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:18.466366  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:18.466433  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:18.641704  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:18.879423  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:18.980124  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:18.980363  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:19.141798  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:19.379365  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:19.465999  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:19.466028  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:19.640945  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:19.879387  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:19.966645  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:19.966849  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:20.141149  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:20.379959  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:20.545520  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:20.545599  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:20.727863  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:20.878770  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:20.965345  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:20.965544  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.141504  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:21.378988  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:21.466059  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:21.466194  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.641759  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:21.878636  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:21.965347  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.965394  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.140906  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:22.379120  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:22.466066  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.466186  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:22.641549  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:22.879495  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:22.966302  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.966306  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:23.140907  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:23.379414  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:23.466100  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:23.466289  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:23.640878  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:23.879445  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:23.966398  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:23.966451  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:24.141406  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:24.379490  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:24.468461  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:24.468667  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:24.641051  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:24.879994  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:24.966362  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:24.966480  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:25.141787  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:25.378802  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:25.466975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:25.467055  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:25.640975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:25.942565  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:25.966318  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:25.966506  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:26.141449  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:26.378591  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:26.465790  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:26.465796  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:26.640672  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:26.878851  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:26.966070  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:26.966184  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:27.141361  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:27.380438  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:27.466783  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:27.466990  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:27.641444  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:27.878826  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:27.979801  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:27.979850  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:28.140744  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:28.379055  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:28.466053  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:28.466262  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:28.641303  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:28.879679  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:28.965800  255490 kapi.go:107] duration metric: took 43.003520491s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 20:31:28.965911  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:29.141056  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:29.379115  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:29.466254  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:29.641507  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:29.878636  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:29.965941  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:30.141206  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:30.379857  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:30.477055  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:30.641385  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:30.882411  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:30.966429  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:31.141855  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:31.379745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:31.465333  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:31.641912  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:31.878750  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:31.967269  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:32.140478  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:32.378472  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:32.466262  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:32.641363  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:32.878949  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:32.967786  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:33.140929  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:33.379057  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:33.466061  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:33.640975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:33.879276  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:33.966802  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:34.141375  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:34.379084  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:34.466440  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:34.641629  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:34.879379  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:34.966285  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:35.141496  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:35.378932  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:35.465775  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:35.640911  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:35.879933  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:35.968449  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:36.141155  255490 kapi.go:107] duration metric: took 43.503506437s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:31:36.239922  255490 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-658933 cluster.
	I1120 20:31:36.340891  255490 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:31:36.406587  255490 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1120 20:31:36.432289  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:36.466587  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:36.879698  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:36.966233  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:37.381934  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:37.465962  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:37.878717  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:37.965558  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:38.378917  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:38.466261  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:38.878887  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:38.965538  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:39.379310  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:39.465857  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:39.879910  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:39.966474  255490 kapi.go:107] duration metric: took 54.004199635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:31:40.378921  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:40.880786  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:41.379335  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:41.879604  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:42.379810  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:42.880290  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:43.379398  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:43.879071  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:44.379045  255490 kapi.go:107] duration metric: took 58.003792137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 20:31:44.422235  255490 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, yakd, cloud-spanner, nvidia-device-plugin, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1120 20:31:44.484100  255490 addons.go:515] duration metric: took 1m0.195604559s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns inspektor-gadget storage-provisioner metrics-server yakd cloud-spanner nvidia-device-plugin default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1120 20:31:44.484183  255490 start.go:247] waiting for cluster config update ...
	I1120 20:31:44.484206  255490 start.go:256] writing updated cluster config ...
	I1120 20:31:44.484522  255490 ssh_runner.go:195] Run: rm -f paused
	I1120 20:31:44.488771  255490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:31:44.492191  255490 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zbjpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.496661  255490 pod_ready.go:94] pod "coredns-66bc5c9577-zbjpk" is "Ready"
	I1120 20:31:44.496685  255490 pod_ready.go:86] duration metric: took 4.449802ms for pod "coredns-66bc5c9577-zbjpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.498585  255490 pod_ready.go:83] waiting for pod "etcd-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.502485  255490 pod_ready.go:94] pod "etcd-addons-658933" is "Ready"
	I1120 20:31:44.502510  255490 pod_ready.go:86] duration metric: took 3.902985ms for pod "etcd-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.504290  255490 pod_ready.go:83] waiting for pod "kube-apiserver-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.508014  255490 pod_ready.go:94] pod "kube-apiserver-addons-658933" is "Ready"
	I1120 20:31:44.508031  255490 pod_ready.go:86] duration metric: took 3.720075ms for pod "kube-apiserver-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.509740  255490 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.893180  255490 pod_ready.go:94] pod "kube-controller-manager-addons-658933" is "Ready"
	I1120 20:31:44.893239  255490 pod_ready.go:86] duration metric: took 383.453382ms for pod "kube-controller-manager-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.094021  255490 pod_ready.go:83] waiting for pod "kube-proxy-tkd84" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.493525  255490 pod_ready.go:94] pod "kube-proxy-tkd84" is "Ready"
	I1120 20:31:45.493553  255490 pod_ready.go:86] duration metric: took 399.502857ms for pod "kube-proxy-tkd84" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.692987  255490 pod_ready.go:83] waiting for pod "kube-scheduler-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:46.091911  255490 pod_ready.go:94] pod "kube-scheduler-addons-658933" is "Ready"
	I1120 20:31:46.091940  255490 pod_ready.go:86] duration metric: took 398.925831ms for pod "kube-scheduler-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:46.091952  255490 pod_ready.go:40] duration metric: took 1.603149594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:31:46.136272  255490 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:31:46.138330  255490 out.go:179] * Done! kubectl is now configured to use "addons-658933" cluster and "default" namespace by default
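The long run of kapi.go:96 lines above is minikube's addon wait loop: for each addon it polls the pods matching a label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and so on) on a fixed interval until they leave Pending, then records the total wait as a duration metric (kapi.go:107). As a rough illustration of that pattern, here is a minimal client-go sketch; the function name and message format are assumptions for illustration, not minikube's actual kapi.go code.

	// poll.go: minimal sketch of a label-selector pod wait loop,
	// in the spirit of the kapi.go:96 log lines above. Illustrative only;
	// not minikube's implementation.
	package poll
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// WaitForPodsByLabel lists pods matching selector in ns every interval,
	// logging the current phase each attempt, until all matching pods are
	// Running or the timeout elapses.
	func WaitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			allRunning := false
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning = true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						allRunning = false
					}
				}
			} else {
				// No matching pods yet (or a transient list error):
				// report Pending, as the log lines above do.
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			}
			if allRunning {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods %q", selector)
			}
			time.Sleep(interval)
		}
	}

The interleaving of selectors in the log (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) comes from one such loop running per addon concurrently, which is why each selector reports on its own cadence and finishes with its own "duration metric: took ..." line.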
	
	
	==> CRI-O <==
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.28359285Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-h47jz/registry-creds" id=a418d81c-6199-4473-af3c-99fad8417a21 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.283735227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.289569961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.290040045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.329328206Z" level=info msg="Created container 936f21b8c96b8b2a1d4ca4e5eb739f937563a662d2e639d6d42f7ea23b484e53: kube-system/registry-creds-764b6fb674-h47jz/registry-creds" id=a418d81c-6199-4473-af3c-99fad8417a21 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.329978346Z" level=info msg="Starting container: 936f21b8c96b8b2a1d4ca4e5eb739f937563a662d2e639d6d42f7ea23b484e53" id=3853b441-2365-42cd-8ecd-674856f3bc11 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:33:14 addons-658933 crio[772]: time="2025-11-20T20:33:14.332192023Z" level=info msg="Started container" PID=8946 containerID=936f21b8c96b8b2a1d4ca4e5eb739f937563a662d2e639d6d42f7ea23b484e53 description=kube-system/registry-creds-764b6fb674-h47jz/registry-creds id=3853b441-2365-42cd-8ecd-674856f3bc11 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ddd862bf1fd609b4ee686765f65f63246258ac03c6e72428d48850d53033aa5f
	Nov 20 20:33:38 addons-658933 crio[772]: time="2025-11-20T20:33:38.693786086Z" level=info msg="Stopping pod sandbox: 798b7d73bf46092ee35397d3511da987d8d3314473881c360aeee4e60564e0ed" id=75f0406d-153c-4792-9808-2151749be8b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 20:33:38 addons-658933 crio[772]: time="2025-11-20T20:33:38.693851807Z" level=info msg="Stopped pod sandbox (already stopped): 798b7d73bf46092ee35397d3511da987d8d3314473881c360aeee4e60564e0ed" id=75f0406d-153c-4792-9808-2151749be8b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 20:33:38 addons-658933 crio[772]: time="2025-11-20T20:33:38.694181356Z" level=info msg="Removing pod sandbox: 798b7d73bf46092ee35397d3511da987d8d3314473881c360aeee4e60564e0ed" id=74605f41-0c0c-4716-8445-8c0cd497a8c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 20 20:33:38 addons-658933 crio[772]: time="2025-11-20T20:33:38.697311513Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 20:33:38 addons-658933 crio[772]: time="2025-11-20T20:33:38.697374655Z" level=info msg="Removed pod sandbox: 798b7d73bf46092ee35397d3511da987d8d3314473881c360aeee4e60564e0ed" id=74605f41-0c0c-4716-8445-8c0cd497a8c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.104380729Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-8jkwh/POD" id=57fd74e9-37dc-4807-aace-742c91ed7dab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.104454075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.110565817Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8jkwh Namespace:default ID:8e4edb111bcb2d727172b1b216ac8e43642faddb8df41c7f247f9a03db0abd9e UID:9cfb0e37-4388-461c-9cd5-350d74063d9b NetNS:/var/run/netns/649ac81f-6640-4d5a-b65a-fc5ebf115793 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00052a858}] Aliases:map[]}"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.110599083Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-8jkwh to CNI network \"kindnet\" (type=ptp)"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.120621279Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8jkwh Namespace:default ID:8e4edb111bcb2d727172b1b216ac8e43642faddb8df41c7f247f9a03db0abd9e UID:9cfb0e37-4388-461c-9cd5-350d74063d9b NetNS:/var/run/netns/649ac81f-6640-4d5a-b65a-fc5ebf115793 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00052a858}] Aliases:map[]}"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.120760557Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-8jkwh for CNI network kindnet (type=ptp)"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.121619735Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.12239807Z" level=info msg="Ran pod sandbox 8e4edb111bcb2d727172b1b216ac8e43642faddb8df41c7f247f9a03db0abd9e with infra container: default/hello-world-app-5d498dc89-8jkwh/POD" id=57fd74e9-37dc-4807-aace-742c91ed7dab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.123728952Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d43e51f3-3233-4361-9c51-fa5ddf099869 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.123869356Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d43e51f3-3233-4361-9c51-fa5ddf099869 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.12392173Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=d43e51f3-3233-4361-9c51-fa5ddf099869 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.124610235Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=27cd5258-2395-4855-bec6-b0b374ce5bbc name=/runtime.v1.ImageService/PullImage
	Nov 20 20:34:33 addons-658933 crio[772]: time="2025-11-20T20:34:33.132910684Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
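	
	The create/start/pull sequence above can also be inspected interactively against the CRI-O socket. A minimal sketch, assuming a shell on the node via minikube ssh and the crictl binary available there:
	
	  minikube -p addons-658933 ssh
	  sudo crictl ps                                      # running containers; matches the table below
	  sudo crictl pull docker.io/kicbase/echo-server:1.0  # retry the pull CRI-O logged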
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	936f21b8c96b8       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   ddd862bf1fd60       registry-creds-764b6fb674-h47jz            kube-system
	f122646757263       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   a06e15054beb0       nginx                                      default
	5abeb02d2e1da       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   8cc6ad1024aab       busybox                                    default
	0869a7f04bf4e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	c00315bedde6d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	3dc9d9f32ffaa       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	e84ec310b2afd       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	a124ab10918ee       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago        Running             controller                               0                   78b9702a19e31       ingress-nginx-controller-6c8bf45fb-dsc49   ingress-nginx
	e079a3716f65a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   323a6449d1860       gcp-auth-78565c9fb4-vprfm                  gcp-auth
	564d810ba191a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	fddb94943c333       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   d41f82cebc1c1       gadget-g5x6v                               gadget
	ca550c41b2a77       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   5dfa51a9926d0       registry-proxy-lq2h5                       kube-system
	6362678378ad4       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   4f3f2ab1ef847       nvidia-device-plugin-daemonset-xkkmp       kube-system
	cf2ac7eff8739       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   3e5156c76b147       amd-gpu-device-plugin-vm8jx                kube-system
	9224e8f92f1b1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	adac6d2c858f2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   3fd3ed518b255       csi-hostpath-attacher-0                    kube-system
	afe1aac38b026       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   626d22e42829b       snapshot-controller-7d9fbc56b8-7fv92       kube-system
	a3370293507b7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   e96a1ab544b70       csi-hostpath-resizer-0                     kube-system
	74e0e28db0b78       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   1a403f5b3750d       local-path-provisioner-648f6765c9-tchwv    local-path-storage
	cfe3cfa35ee4e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   1412ff6800cb4       snapshot-controller-7d9fbc56b8-bxn2q       kube-system
	ca85a97de8f75       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    1                   5183d8f5f49db       ingress-nginx-admission-patch-b4csh        ingress-nginx
	b453b2b6d5746       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   f187215f51f32       ingress-nginx-admission-create-lwnhv       ingress-nginx
	581473717f5db       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   9657ed102c296       yakd-dashboard-5ff678cb9-rk9b8             yakd-dashboard
	6e59cda6b4475       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago        Running             cloud-spanner-emulator                   0                   ef8fdb25848a8       cloud-spanner-emulator-6f9fcf858b-j7pgx    default
	6d168b8373fd1       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   a6cdfb1765ad1       kube-ingress-dns-minikube                  kube-system
	18df77ead4cf8       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   59c5bb1453517       registry-6b586f9694-zwcwl                  kube-system
	2dc54febfd287       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   ffb3f956f8953       metrics-server-85b7d694d7-z2pc4            kube-system
	c812b6447964f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   626c2e8eee40e       coredns-66bc5c9577-zbjpk                   kube-system
	b9c2a6d4679fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   3f14ab61d1b19       storage-provisioner                        kube-system
	2f3f9b31aedbb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago        Running             kube-proxy                               0                   e3ae50b3edc41       kube-proxy-tkd84                           kube-system
	cb4964e2e68f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   9173d8b0919c7       kindnet-46wwr                              kube-system
	c51ec37256def       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             3 minutes ago        Running             kube-controller-manager                  0                   4a697d374c6ab       kube-controller-manager-addons-658933      kube-system
	b69462e9ce88e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             3 minutes ago        Running             kube-scheduler                           0                   5eb8d48f67cd9       kube-scheduler-addons-658933               kube-system
	6d905baa8985b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             3 minutes ago        Running             etcd                                     0                   0cba80a333c64       etcd-addons-658933                         kube-system
	4038552b2ad49       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             3 minutes ago        Running             kube-apiserver                           0                   6f6ca67588f9d       kube-apiserver-addons-658933               kube-system
	
	
	==> coredns [c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7] <==
	[INFO] 10.244.0.22:53948 - 51258 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00657152s
	[INFO] 10.244.0.22:36503 - 63266 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004683085s
	[INFO] 10.244.0.22:43082 - 58810 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004787482s
	[INFO] 10.244.0.22:34122 - 64455 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003849588s
	[INFO] 10.244.0.22:35376 - 41719 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005227255s
	[INFO] 10.244.0.22:55418 - 26939 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000887284s
	[INFO] 10.244.0.22:33424 - 36150 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00106945s
	[INFO] 10.244.0.25:37619 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000216893s
	[INFO] 10.244.0.25:47088 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168955s
	[INFO] 10.244.0.31:46170 - 34488 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000298982s
	[INFO] 10.244.0.31:50493 - 50210 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00034887s
	[INFO] 10.244.0.31:45399 - 12245 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000124623s
	[INFO] 10.244.0.31:45151 - 26622 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000137191s
	[INFO] 10.244.0.31:34014 - 64945 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000118089s
	[INFO] 10.244.0.31:47406 - 50964 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000175774s
	[INFO] 10.244.0.31:39385 - 1531 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003860828s
	[INFO] 10.244.0.31:51838 - 6967 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00416589s
	[INFO] 10.244.0.31:54019 - 23061 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006030193s
	[INFO] 10.244.0.31:34196 - 7957 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.007622746s
	[INFO] 10.244.0.31:54303 - 24475 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004337648s
	[INFO] 10.244.0.31:58840 - 63309 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004417583s
	[INFO] 10.244.0.31:53974 - 58166 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004354649s
	[INFO] 10.244.0.31:48776 - 25101 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004521089s
	[INFO] 10.244.0.31:42684 - 16939 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001886203s
	[INFO] 10.244.0.31:36039 - 24688 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001977489s
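	
	The NXDOMAIN ladder above is ordinary ndots:5 search-path expansion: CoreDNS tries each suffix from the pod's resolv.conf search list (the cluster.local domains, then the host's GCE suffixes such as c.k8s-minikube.internal and google.internal) before the bare name finally answers NOERROR. To watch the same expansion from inside the cluster, assuming a throwaway busybox pod is acceptable:
	
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup storage.googleapis.com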
	
	
	==> describe nodes <==
	Name:               addons-658933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-658933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-658933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-658933
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-658933"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:30:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-658933
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:34:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:34:22 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:34:22 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:34:22 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:34:22 +0000   Thu, 20 Nov 2025 20:30:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-658933
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                9c80e830-a2e4-4134-9f57-97b54019831a
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-6f9fcf858b-j7pgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  default                     hello-world-app-5d498dc89-8jkwh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-g5x6v                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  gcp-auth                    gcp-auth-78565c9fb4-vprfm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-dsc49    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m49s
	  kube-system                 amd-gpu-device-plugin-vm8jx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-66bc5c9577-zbjpk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpathplugin-z7dj2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-addons-658933                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m56s
	  kube-system                 kindnet-46wwr                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m50s
	  kube-system                 kube-apiserver-addons-658933                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-addons-658933       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-tkd84                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-scheduler-addons-658933                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 metrics-server-85b7d694d7-z2pc4             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m49s
	  kube-system                 nvidia-device-plugin-daemonset-xkkmp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 registry-6b586f9694-zwcwl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 registry-creds-764b6fb674-h47jz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 registry-proxy-lq2h5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 snapshot-controller-7d9fbc56b8-7fv92        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-bxn2q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  local-path-storage          local-path-provisioner-648f6765c9-tchwv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rk9b8              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m48s  kube-proxy       
	  Normal  Starting                 3m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s  kubelet          Node addons-658933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s  kubelet          Node addons-658933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s  kubelet          Node addons-658933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m51s  node-controller  Node addons-658933 event: Registered Node addons-658933 in Controller
	  Normal  NodeReady                3m39s  kubelet          Node addons-658933 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106] <==
	{"level":"warn","ts":"2025-11-20T20:30:35.800641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.806764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.814383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.833259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.839255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.844989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.888100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:46.856132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:46.862986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:05.009858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.904076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:05.009956Z","caller":"traceutil/trace.go:172","msg":"trace[2046628416] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"132.058384ms","start":"2025-11-20T20:31:04.877882Z","end":"2025-11-20T20:31:05.009940Z","steps":["trace[2046628416] 'range keys from in-memory index tree'  (duration: 131.800514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:10.979455Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.608197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:10.979608Z","caller":"traceutil/trace.go:172","msg":"trace[193603763] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:980; }","duration":"101.774585ms","start":"2025-11-20T20:31:10.877818Z","end":"2025-11-20T20:31:10.979593Z","steps":["trace[193603763] 'range keys from in-memory index tree'  (duration: 101.546536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:13.308865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.317428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.331958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.339924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:31:14.951487Z","caller":"traceutil/trace.go:172","msg":"trace[1999540074] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"106.660318ms","start":"2025-11-20T20:31:14.844806Z","end":"2025-11-20T20:31:14.951466Z","steps":["trace[1999540074] 'process raft request'  (duration: 106.539878ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:20.377601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.955615ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:20.377661Z","caller":"traceutil/trace.go:172","msg":"trace[1843906868] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"139.973519ms","start":"2025-11-20T20:31:20.237665Z","end":"2025-11-20T20:31:20.377639Z","steps":["trace[1843906868] 'process raft request'  (duration: 59.331457ms)","trace[1843906868] 'compare'  (duration: 80.534512ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:31:20.377678Z","caller":"traceutil/trace.go:172","msg":"trace[193963102] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1071; }","duration":"136.052904ms","start":"2025-11-20T20:31:20.241610Z","end":"2025-11-20T20:31:20.377663Z","steps":["trace[193963102] 'range keys from in-memory index tree'  (duration: 135.903091ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950973Z","caller":"traceutil/trace.go:172","msg":"trace[1605497090] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"109.578152ms","start":"2025-11-20T20:31:25.841379Z","end":"2025-11-20T20:31:25.950957Z","steps":["trace[1605497090] 'process raft request'  (duration: 109.310128ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.951072Z","caller":"traceutil/trace.go:172","msg":"trace[1563468241] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"105.824766ms","start":"2025-11-20T20:31:25.845231Z","end":"2025-11-20T20:31:25.951056Z","steps":["trace[1563468241] 'process raft request'  (duration: 105.693332ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950986Z","caller":"traceutil/trace.go:172","msg":"trace[809948351] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"108.233852ms","start":"2025-11-20T20:31:25.842744Z","end":"2025-11-20T20:31:25.950978Z","steps":["trace[809948351] 'process raft request'  (duration: 108.110564ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950971Z","caller":"traceutil/trace.go:172","msg":"trace[1456414447] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"108.541361ms","start":"2025-11-20T20:31:25.842400Z","end":"2025-11-20T20:31:25.950941Z","steps":["trace[1456414447] 'process raft request'  (duration: 108.386842ms)"],"step_count":1}
	
	
	==> gcp-auth [e079a3716f65a1edf2f2bd82a1da29c254f9b9edfa58fbb8ded0e021c8f48ab8] <==
	2025/11/20 20:31:35 GCP Auth Webhook started!
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	2025/11/20 20:32:04 Ready to marshal response ...
	2025/11/20 20:32:04 Ready to write response ...
	2025/11/20 20:32:04 Ready to marshal response ...
	2025/11/20 20:32:04 Ready to write response ...
	2025/11/20 20:32:06 Ready to marshal response ...
	2025/11/20 20:32:06 Ready to write response ...
	2025/11/20 20:32:07 Ready to marshal response ...
	2025/11/20 20:32:07 Ready to write response ...
	2025/11/20 20:32:13 Ready to marshal response ...
	2025/11/20 20:32:13 Ready to write response ...
	2025/11/20 20:32:14 Ready to marshal response ...
	2025/11/20 20:32:14 Ready to write response ...
	2025/11/20 20:32:44 Ready to marshal response ...
	2025/11/20 20:32:44 Ready to write response ...
	2025/11/20 20:34:32 Ready to marshal response ...
	2025/11/20 20:34:32 Ready to write response ...
	
	
	==> kernel <==
	 20:34:34 up  3:16,  0 user,  load average: 0.39, 1.19, 1.07
	Linux addons-658933 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3] <==
	I1120 20:32:25.539268       1 main.go:301] handling current node
	I1120 20:32:35.539206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:32:35.539255       1 main.go:301] handling current node
	I1120 20:32:45.539203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:32:45.539257       1 main.go:301] handling current node
	I1120 20:32:55.538285       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:32:55.538345       1 main.go:301] handling current node
	I1120 20:33:05.538554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:05.538589       1 main.go:301] handling current node
	I1120 20:33:15.538449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:15.538478       1 main.go:301] handling current node
	I1120 20:33:25.538851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:25.538882       1 main.go:301] handling current node
	I1120 20:33:35.538404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:35.538435       1 main.go:301] handling current node
	I1120 20:33:45.539041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:45.539087       1 main.go:301] handling current node
	I1120 20:33:55.538713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:33:55.538755       1 main.go:301] handling current node
	I1120 20:34:05.538452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:34:05.538484       1 main.go:301] handling current node
	I1120 20:34:15.538550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:34:15.538582       1 main.go:301] handling current node
	I1120 20:34:25.538423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:34:25.538465       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e] <==
	W1120 20:31:00.047524       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:31:00.047568       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:31:00.047592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:31:00.047668       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:31:00.048809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:31:04.064547       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:31:04.064636       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:31:04.064677       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1120 20:31:04.076718       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1120 20:31:13.308795       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 20:31:13.317336       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 20:31:13.331919       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 20:31:13.339929       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1120 20:31:55.844411       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44948: use of closed network connection
	E1120 20:31:56.007342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44976: use of closed network connection
	I1120 20:32:07.123021       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1120 20:32:07.337121       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.9.185"}
	I1120 20:32:25.175365       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1120 20:34:32.867235       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.37.20"}
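	
	The repeated 503s for v1beta1.metrics.k8s.io above mean the aggregated metrics API was not yet reachable around 20:31. Whether it recovered can be checked directly; a minimal sketch, assuming kubectl targets this cluster and the addon's usual k8s-app=metrics-server label:
	
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl -n kube-system get pods -l k8s-app=metrics-server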
	
	
	==> kube-controller-manager [c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f] <==
	I1120 20:30:43.288013       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:30:43.288020       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 20:30:43.288101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-658933"
	I1120 20:30:43.288164       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 20:30:43.288329       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:30:43.288503       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:30:43.288559       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 20:30:43.288644       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:30:43.288664       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:30:43.289359       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 20:30:43.289377       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:30:43.289388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 20:30:43.289426       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 20:30:43.289484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 20:30:43.290629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:30:43.291797       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:30:43.292638       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:30:43.294729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:30:43.312859       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:30:58.291250       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1120 20:31:13.300936       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:31:13.300985       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:31:13.324311       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:31:13.401480       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:31:13.424839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb] <==
	I1120 20:30:45.272018       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:30:45.560864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:30:45.667314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:30:45.670429       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 20:30:45.670725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:30:45.723636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:30:45.723780       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:30:45.743087       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:30:45.743541       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:30:45.743559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:30:45.745531       1 config.go:200] "Starting service config controller"
	I1120 20:30:45.745594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:30:45.746004       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:30:45.746683       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:30:45.746114       1 config.go:309] "Starting node config controller"
	I1120 20:30:45.746793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:30:45.746825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:30:45.746408       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:30:45.746868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:30:45.845793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:30:45.852808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:30:45.854634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037] <==
	E1120 20:30:36.300247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:30:36.300244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:30:36.300293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:30:36.300251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:30:36.300341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:30:36.300344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:30:36.300363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:30:36.300369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:30:36.300382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:30:36.300397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:30:36.300448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:30:36.300490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:30:36.300499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:30:36.300549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:30:36.300585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:30:37.138426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:30:37.183809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:30:37.207208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:30:37.370114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:30:37.398268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:30:37.442341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:30:37.452455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:30:37.492834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:30:37.533088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1120 20:30:39.897865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
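
	The "Failed to watch ... is forbidden" burst above is the scheduler's informers starting before its RBAC bindings were reconciled; it resolves once the caches sync at 20:30:39. A permission such as the statefulsets one can be probed directly with a SubjectAccessReview. A minimal sketch, assuming a reachable kubeconfig at the default location (illustrative only, not part of the test harness):

	    package main

	    import (
	        "context"
	        "fmt"

	        authv1 "k8s.io/api/authorization/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Build a client from the default kubeconfig (~/.kube/config).
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Ask the API server whether system:kube-scheduler may list
	        // statefulsets.apps cluster-wide, mirroring the reflector error above.
	        sar := &authv1.SubjectAccessReview{
	            Spec: authv1.SubjectAccessReviewSpec{
	                User: "system:kube-scheduler",
	                ResourceAttributes: &authv1.ResourceAttributes{
	                    Verb:     "list",
	                    Group:    "apps",
	                    Resource: "statefulsets",
	                },
	            },
	        }
	        res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
	            context.Background(), sar, metav1.CreateOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	    }

	Once RBAC has settled this prints allowed=true, consistent with the errors stopping after the cache-sync line.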
	
	
	==> kubelet <==
	Nov 20 20:32:46 addons-658933 kubelet[1301]: I1120 20:32:46.629143    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lq2h5" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.912345    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6blx2\" (UniqueName: \"kubernetes.io/projected/512ded7a-b134-432e-a3e3-59d7976cb5b7-kube-api-access-6blx2\") pod \"512ded7a-b134-432e-a3e3-59d7976cb5b7\" (UID: \"512ded7a-b134-432e-a3e3-59d7976cb5b7\") "
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.912410    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/512ded7a-b134-432e-a3e3-59d7976cb5b7-gcp-creds\") pod \"512ded7a-b134-432e-a3e3-59d7976cb5b7\" (UID: \"512ded7a-b134-432e-a3e3-59d7976cb5b7\") "
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.912551    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^116efbed-c650-11f0-9b66-be77edbc64fb\") pod \"512ded7a-b134-432e-a3e3-59d7976cb5b7\" (UID: \"512ded7a-b134-432e-a3e3-59d7976cb5b7\") "
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.912577    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512ded7a-b134-432e-a3e3-59d7976cb5b7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "512ded7a-b134-432e-a3e3-59d7976cb5b7" (UID: "512ded7a-b134-432e-a3e3-59d7976cb5b7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.912754    1301 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/512ded7a-b134-432e-a3e3-59d7976cb5b7-gcp-creds\") on node \"addons-658933\" DevicePath \"\""
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.915024    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512ded7a-b134-432e-a3e3-59d7976cb5b7-kube-api-access-6blx2" (OuterVolumeSpecName: "kube-api-access-6blx2") pod "512ded7a-b134-432e-a3e3-59d7976cb5b7" (UID: "512ded7a-b134-432e-a3e3-59d7976cb5b7"). InnerVolumeSpecName "kube-api-access-6blx2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 20 20:32:51 addons-658933 kubelet[1301]: I1120 20:32:51.916146    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^116efbed-c650-11f0-9b66-be77edbc64fb" (OuterVolumeSpecName: "task-pv-storage") pod "512ded7a-b134-432e-a3e3-59d7976cb5b7" (UID: "512ded7a-b134-432e-a3e3-59d7976cb5b7"). InnerVolumeSpecName "pvc-d257b179-f2b0-497a-9916-0f55187e8b1a". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.013486    1301 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-d257b179-f2b0-497a-9916-0f55187e8b1a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^116efbed-c650-11f0-9b66-be77edbc64fb\") on node \"addons-658933\" "
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.013518    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6blx2\" (UniqueName: \"kubernetes.io/projected/512ded7a-b134-432e-a3e3-59d7976cb5b7-kube-api-access-6blx2\") on node \"addons-658933\" DevicePath \"\""
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.017648    1301 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-d257b179-f2b0-497a-9916-0f55187e8b1a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^116efbed-c650-11f0-9b66-be77edbc64fb") on node "addons-658933"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.113820    1301 reconciler_common.go:299] "Volume detached for volume \"pvc-d257b179-f2b0-497a-9916-0f55187e8b1a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^116efbed-c650-11f0-9b66-be77edbc64fb\") on node \"addons-658933\" DevicePath \"\""
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.261830    1301 scope.go:117] "RemoveContainer" containerID="43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.270659    1301 scope.go:117] "RemoveContainer" containerID="43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: E1120 20:32:52.271053    1301 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0\": container with ID starting with 43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0 not found: ID does not exist" containerID="43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.271099    1301 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0"} err="failed to get container status \"43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0\": rpc error: code = NotFound desc = could not find container \"43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0\": container with ID starting with 43d2361a2386265e6ff5a1c1ce36b5a34b8e82e298506862e16f0e70ea0774a0 not found: ID does not exist"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.626593    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vm8jx" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:32:52 addons-658933 kubelet[1301]: I1120 20:32:52.629896    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512ded7a-b134-432e-a3e3-59d7976cb5b7" path="/var/lib/kubelet/pods/512ded7a-b134-432e-a3e3-59d7976cb5b7/volumes"
	Nov 20 20:32:58 addons-658933 kubelet[1301]: E1120 20:32:58.670294    1301 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-h47jz" podUID="730db9ac-a022-4b3f-a29d-60b579072144"
	Nov 20 20:33:14 addons-658933 kubelet[1301]: I1120 20:33:14.364756    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-h47jz" podStartSLOduration=148.76638571 podStartE2EDuration="2m30.364735236s" podCreationTimestamp="2025-11-20 20:30:44 +0000 UTC" firstStartedPulling="2025-11-20 20:33:12.649318055 +0000 UTC m=+154.108958574" lastFinishedPulling="2025-11-20 20:33:14.24766757 +0000 UTC m=+155.707308100" observedRunningTime="2025-11-20 20:33:14.364644744 +0000 UTC m=+155.824285282" watchObservedRunningTime="2025-11-20 20:33:14.364735236 +0000 UTC m=+155.824375774"
	Nov 20 20:33:56 addons-658933 kubelet[1301]: I1120 20:33:56.626123    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xkkmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:33:58 addons-658933 kubelet[1301]: I1120 20:33:58.627474    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lq2h5" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:34:19 addons-658933 kubelet[1301]: I1120 20:34:19.626348    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vm8jx" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:34:32 addons-658933 kubelet[1301]: I1120 20:34:32.944811    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9cfb0e37-4388-461c-9cd5-350d74063d9b-gcp-creds\") pod \"hello-world-app-5d498dc89-8jkwh\" (UID: \"9cfb0e37-4388-461c-9cd5-350d74063d9b\") " pod="default/hello-world-app-5d498dc89-8jkwh"
	Nov 20 20:34:32 addons-658933 kubelet[1301]: I1120 20:34:32.944895    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt2f\" (UniqueName: \"kubernetes.io/projected/9cfb0e37-4388-461c-9cd5-350d74063d9b-kube-api-access-ngt2f\") pod \"hello-world-app-5d498dc89-8jkwh\" (UID: \"9cfb0e37-4388-461c-9cd5-350d74063d9b\") " pod="default/hello-world-app-5d498dc89-8jkwh"
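
	The recurring "Unable to retrieve pull secret" messages are benign here: the kubelet is only noting that the gcp-auth secret referenced for image pulls does not exist, which is expected when the gcp-auth addon is not enabled. A quick confirmation, as a fragment reusing the clientset cs from the sketch above (ctx is any context.Context; apierrors is k8s.io/apimachinery/pkg/api/errors):

	    // A NotFound confirms the secret is absent, so pulls proceed without it.
	    _, err = cs.CoreV1().Secrets("kube-system").Get(ctx, "gcp-auth", metav1.GetOptions{})
	    fmt.Println("gcp-auth secret missing:", apierrors.IsNotFound(err))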
	
	
	==> storage-provisioner [b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b] <==
	W1120 20:34:09.136298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:11.139649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:11.144007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:13.147505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:13.152801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:15.155998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:15.160040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:17.163345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:17.168820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:19.172009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:19.175974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:21.179527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:21.183311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:23.186158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:23.190371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:25.193602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:25.197964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:27.201375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:27.205946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:29.209474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:29.214672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:31.218277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:31.222486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:33.226559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:34:33.231503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
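
The storage-provisioner warnings at the end of the dump are a deprecation signal rather than a failure: the provisioner still reads core/v1 Endpoints (likely for its leader-election lock), which Kubernetes deprecates in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, exactly as the warning text says. Reading the replacement resource is straightforward with the same clientset; a fragment reusing cs and ctx from the earlier sketch (illustrative, not the provisioner's actual code):

    // List the EndpointSlices that supersede v1 Endpoints in kube-system.
    slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, s := range slices.Items {
        fmt.Println(s.Name, "endpoints:", len(s.Endpoints))
    }
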
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-658933 -n addons-658933
helpers_test.go:269: (dbg) Run:  kubectl --context addons-658933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-658933 describe pod ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-658933 describe pod ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh: exit status 1 (57.107884ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lwnhv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b4csh" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-658933 describe pod ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (249.448891ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:34:35.290504  269950 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:34:35.290801  269950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:35.290812  269950 out.go:374] Setting ErrFile to fd 2...
	I1120 20:34:35.290818  269950 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:35.291011  269950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:34:35.291281  269950 mustload.go:66] Loading cluster: addons-658933
	I1120 20:34:35.291612  269950 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:34:35.291628  269950 addons.go:607] checking whether the cluster is paused
	I1120 20:34:35.291713  269950 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:34:35.291729  269950 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:34:35.292167  269950 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:34:35.309775  269950 ssh_runner.go:195] Run: systemctl --version
	I1120 20:34:35.309829  269950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:34:35.328158  269950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:34:35.423639  269950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:34:35.423731  269950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:34:35.455805  269950 cri.go:89] found id: "936f21b8c96b8b2a1d4ca4e5eb739f937563a662d2e639d6d42f7ea23b484e53"
	I1120 20:34:35.455827  269950 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:34:35.455832  269950 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:34:35.455834  269950 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:34:35.455837  269950 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:34:35.455840  269950 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:34:35.455843  269950 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:34:35.455845  269950 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:34:35.455847  269950 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:34:35.455852  269950 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:34:35.455855  269950 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:34:35.455857  269950 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:34:35.455860  269950 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:34:35.455862  269950 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:34:35.455864  269950 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:34:35.455879  269950 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:34:35.455882  269950 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:34:35.455886  269950 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:34:35.455920  269950 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:34:35.455927  269950 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:34:35.455931  269950 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:34:35.455934  269950 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:34:35.455937  269950 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:34:35.455939  269950 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:34:35.455942  269950 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:34:35.455944  269950 cri.go:89] found id: ""
	I1120 20:34:35.455990  269950 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:34:35.470389  269950 out.go:203] 
	W1120 20:34:35.471546  269950 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:34:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:34:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:34:35.471563  269950 out.go:285] * 
	* 
	W1120 20:34:35.475716  269950 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:34:35.476928  269950 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable ingress --alsologtostderr -v=1: exit status 11 (245.927978ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:34:35.539876  270013 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:34:35.540170  270013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:35.540181  270013 out.go:374] Setting ErrFile to fd 2...
	I1120 20:34:35.540186  270013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:34:35.540409  270013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:34:35.540678  270013 mustload.go:66] Loading cluster: addons-658933
	I1120 20:34:35.541028  270013 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:34:35.541045  270013 addons.go:607] checking whether the cluster is paused
	I1120 20:34:35.541128  270013 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:34:35.541140  270013 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:34:35.541544  270013 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:34:35.559315  270013 ssh_runner.go:195] Run: systemctl --version
	I1120 20:34:35.559370  270013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:34:35.576936  270013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:34:35.671664  270013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:34:35.671739  270013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:34:35.701772  270013 cri.go:89] found id: "936f21b8c96b8b2a1d4ca4e5eb739f937563a662d2e639d6d42f7ea23b484e53"
	I1120 20:34:35.701799  270013 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:34:35.701805  270013 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:34:35.701808  270013 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:34:35.701822  270013 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:34:35.701826  270013 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:34:35.701829  270013 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:34:35.701831  270013 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:34:35.701834  270013 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:34:35.701839  270013 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:34:35.701842  270013 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:34:35.701846  270013 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:34:35.701865  270013 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:34:35.701873  270013 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:34:35.701878  270013 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:34:35.701892  270013 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:34:35.701898  270013 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:34:35.701902  270013 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:34:35.701904  270013 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:34:35.701906  270013 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:34:35.701909  270013 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:34:35.701911  270013 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:34:35.701913  270013 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:34:35.701916  270013 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:34:35.701918  270013 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:34:35.701920  270013 cri.go:89] found id: ""
	I1120 20:34:35.701966  270013 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:34:35.716211  270013 out.go:203] 
	W1120 20:34:35.717392  270013 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:34:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:34:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:34:35.717414  270013 out.go:285] * 
	* 
	W1120 20:34:35.721586  270013 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:34:35.722685  270013 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.87s)
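
Every addons-disable in this run dies the same way: before touching the addon, minikube checks whether the cluster is paused by listing CRI containers and then shelling out to `sudo runc list -f json`, and on this CRI-O node the runc state directory /run/runc does not exist, so the probe itself fails and the command aborts with MK_ADDON_DISABLE_PAUSED (exit 11). A fragment reproducing the probe (imports os/exec, strings, fmt); the /run/crun fallback is an assumption that this CRI-O is backed by crun, in which case runc's default state directory is never created:

    // Probe runc's state dir the way the failing paused-check does.
    out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    if err != nil && strings.Contains(string(out), "/run/runc") {
        // Hypothetical fallback: crun keeps its state under /run/crun instead.
        out, err = exec.Command("sudo", "crun", "list").CombinedOutput()
    }
    fmt.Println(string(out), err)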

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-g5x6v" [ff85ea18-bb60-44f6-b945-740114d73e77] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004464834s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (291.34317ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:32:16.641015  266695 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:16.641340  266695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:16.641354  266695 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:16.641358  266695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:16.641666  266695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:16.642036  266695 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:16.642580  266695 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:16.642615  266695 addons.go:607] checking whether the cluster is paused
	I1120 20:32:16.642767  266695 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:16.642791  266695 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:16.643345  266695 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:16.667026  266695 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:16.667090  266695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:16.690029  266695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:16.794696  266695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:16.794820  266695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:16.833072  266695 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:16.833098  266695 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:16.833104  266695 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:16.833109  266695 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:16.833112  266695 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:16.833116  266695 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:16.833118  266695 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:16.833121  266695 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:16.833124  266695 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:16.833129  266695 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:16.833132  266695 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:16.833134  266695 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:16.833137  266695 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:16.833141  266695 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:16.833150  266695 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:16.833161  266695 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:16.833170  266695 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:16.833176  266695 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:16.833180  266695 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:16.833184  266695 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:16.833191  266695 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:16.833196  266695 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:16.833203  266695 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:16.833207  266695 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:16.833232  266695 cri.go:89] found id: ""
	I1120 20:32:16.833282  266695 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:16.852030  266695 out.go:203] 
	W1120 20:32:16.853577  266695 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:16.853592  266695 out.go:285] * 
	* 
	W1120 20:32:16.858193  266695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:16.859762  266695 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.30896ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002862462s
addons_test.go:463: (dbg) Run:  kubectl --context addons-658933 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.95319ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:32:01.393373  264564 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:01.393657  264564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:01.393674  264564 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:01.393678  264564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:01.393855  264564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:01.394130  264564 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:01.394502  264564 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:01.394520  264564 addons.go:607] checking whether the cluster is paused
	I1120 20:32:01.394603  264564 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:01.394615  264564 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:01.394998  264564 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:01.413121  264564 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:01.413168  264564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:01.431654  264564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:01.526999  264564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:01.527074  264564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:01.558349  264564 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:01.558372  264564 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:01.558376  264564 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:01.558379  264564 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:01.558382  264564 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:01.558385  264564 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:01.558388  264564 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:01.558390  264564 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:01.558393  264564 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:01.558413  264564 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:01.558416  264564 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:01.558418  264564 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:01.558421  264564 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:01.558425  264564 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:01.558427  264564 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:01.558434  264564 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:01.558437  264564 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:01.558444  264564 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:01.558447  264564 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:01.558451  264564 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:01.558458  264564 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:01.558466  264564 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:01.558469  264564 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:01.558471  264564 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:01.558474  264564 cri.go:89] found id: ""
	I1120 20:32:01.558511  264564 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:01.573384  264564 out.go:203] 
	W1120 20:32:01.574668  264564 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:01.574689  264564 out.go:285] * 
	* 
	W1120 20:32:01.578771  264564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:01.579951  264564 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (46.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1120 20:32:06.796115  254094 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1120 20:32:06.799682  254094 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1120 20:32:06.799707  254094 kapi.go:107] duration metric: took 3.617185ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.631286ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-658933 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-658933 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [5596a0f9-74eb-48b8-9615-b913b0657342] Pending
helpers_test.go:352: "task-pv-pod" [5596a0f9-74eb-48b8-9615-b913b0657342] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [5596a0f9-74eb-48b8-9615-b913b0657342] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003921793s
addons_test.go:572: (dbg) Run:  kubectl --context addons-658933 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-658933 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-658933 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-658933 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-658933 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-658933 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-658933 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [512ded7a-b134-432e-a3e3-59d7976cb5b7] Pending
helpers_test.go:352: "task-pv-pod-restore" [512ded7a-b134-432e-a3e3-59d7976cb5b7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [512ded7a-b134-432e-a3e3-59d7976cb5b7] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004237634s
addons_test.go:614: (dbg) Run:  kubectl --context addons-658933 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-658933 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-658933 delete volumesnapshot new-snapshot-demo
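
The run above walks the full CSI snapshot round-trip: provision a PVC against the hostpath driver, mount it from task-pv-pod, snapshot it, delete the source pod and claim, then restore the data into hpvc-restore. A minimal sketch of the restore claim, assuming the standard snapshot.storage.k8s.io dataSource pattern (the canonical fields live in testdata/csi-hostpath-driver/pvc-restore.yaml and may differ; the storage class name is an assumption):

$ kubectl --context addons-658933 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc    # class created by the csi-hostpath-driver addon (assumed name)
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo            # the VolumeSnapshot taken above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

The eighteen .status.phase polls above reflect the claim sitting in Pending until provisioning from the snapshot completes and task-pv-pod-restore can start.
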
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (245.756797ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1120 20:32:52.666860  267764 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:52.667123  267764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:52.667132  267764 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:52.667136  267764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:52.667334  267764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:52.667622  267764 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:52.667952  267764 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:52.667966  267764 addons.go:607] checking whether the cluster is paused
	I1120 20:32:52.668043  267764 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:52.668054  267764 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:52.668436  267764 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:52.687249  267764 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:52.687305  267764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:52.704519  267764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:52.800074  267764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:52.800151  267764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:52.829695  267764 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:52.829724  267764 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:52.829729  267764 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:52.829735  267764 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:52.829740  267764 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:52.829746  267764 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:52.829750  267764 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:52.829753  267764 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:52.829757  267764 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:52.829769  267764 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:52.829773  267764 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:52.829776  267764 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:52.829779  267764 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:52.829782  267764 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:52.829784  267764 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:52.829788  267764 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:52.829791  267764 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:52.829796  267764 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:52.829798  267764 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:52.829801  267764 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:52.829803  267764 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:52.829806  267764 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:52.829808  267764 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:52.829811  267764 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:52.829814  267764 cri.go:89] found id: ""
	I1120 20:32:52.829852  267764 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:52.844116  267764 out.go:203] 
	W1120 20:32:52.845331  267764 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:52.845358  267764 out.go:285] * 
	W1120 20:32:52.849576  267764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:52.850831  267764 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
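
Every addons enable/disable in this run dies the same way: the paused-state check lists kube-system containers through crictl successfully, then shells out to `sudo runc list -f json`, which fails because /run/runc is missing on this crio node, and minikube aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED) before doing any addon work. A sketch of reproducing the probe by hand from inside the node (profile name and container count are from this run; the ls step is an illustrative extra):

$ minikube -p addons-658933 ssh
$ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints 24 container ids
$ sudo runc list -f json
time="2025-11-20T20:32:52Z" level=error msg="open /run/runc: no such file or directory"
$ ls -d /run/runc
ls: cannot access '/run/runc': No such file or directory
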
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.439819ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1120 20:32:52.911917  267826 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:52.912276  267826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:52.912291  267826 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:52.912298  267826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:52.912730  267826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:52.913427  267826 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:52.913799  267826 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:52.913817  267826 addons.go:607] checking whether the cluster is paused
	I1120 20:32:52.913928  267826 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:52.913942  267826 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:52.914329  267826 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:52.931915  267826 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:52.931974  267826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:52.949475  267826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:53.043808  267826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:53.043898  267826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:53.074590  267826 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:53.074625  267826 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:53.074630  267826 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:53.074633  267826 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:53.074635  267826 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:53.074639  267826 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:53.074641  267826 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:53.074643  267826 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:53.074646  267826 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:53.074651  267826 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:53.074654  267826 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:53.074663  267826 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:53.074668  267826 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:53.074670  267826 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:53.074673  267826 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:53.074680  267826 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:53.074685  267826 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:53.074688  267826 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:53.074691  267826 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:53.074693  267826 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:53.074698  267826 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:53.074701  267826 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:53.074703  267826 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:53.074706  267826 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:53.074708  267826 cri.go:89] found id: ""
	I1120 20:32:53.074757  267826 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:53.089063  267826 out.go:203] 
	W1120 20:32:53.090314  267826 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:53.090337  267826 out.go:285] * 
	W1120 20:32:53.094383  267826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:53.096066  267826 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.31s)

TestAddons/parallel/Headlamp (2.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-658933 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-658933 --alsologtostderr -v=1: exit status 11 (246.735786ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1120 20:31:56.321632  263726 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:56.321889  263726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:56.321897  263726 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:56.321902  263726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:56.322087  263726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:31:56.322380  263726 mustload.go:66] Loading cluster: addons-658933
	I1120 20:31:56.322731  263726 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:56.322747  263726 addons.go:607] checking whether the cluster is paused
	I1120 20:31:56.322825  263726 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:56.322836  263726 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:31:56.323226  263726 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:31:56.341069  263726 ssh_runner.go:195] Run: systemctl --version
	I1120 20:31:56.341144  263726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:31:56.359282  263726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:31:56.454048  263726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:31:56.454158  263726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:31:56.483179  263726 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:31:56.483204  263726 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:31:56.483209  263726 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:31:56.483225  263726 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:31:56.483230  263726 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:31:56.483235  263726 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:31:56.483238  263726 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:31:56.483242  263726 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:31:56.483245  263726 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:31:56.483258  263726 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:31:56.483262  263726 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:31:56.483267  263726 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:31:56.483271  263726 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:31:56.483276  263726 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:31:56.483283  263726 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:31:56.483300  263726 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:31:56.483310  263726 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:31:56.483317  263726 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:31:56.483321  263726 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:31:56.483325  263726 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:31:56.483329  263726 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:31:56.483332  263726 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:31:56.483336  263726 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:31:56.483340  263726 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:31:56.483344  263726 cri.go:89] found id: ""
	I1120 20:31:56.483388  263726 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:31:56.497788  263726 out.go:203] 
	W1120 20:31:56.499320  263726 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:31:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:31:56.499350  263726 out.go:285] * 
	W1120 20:31:56.503447  263726 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:31:56.504690  263726 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-658933 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-658933
helpers_test.go:243: (dbg) docker inspect addons-658933:

-- stdout --
	[
	    {
	        "Id": "3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b",
	        "Created": "2025-11-20T20:30:24.502961543Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:30:24.541712996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/hosts",
	        "LogPath": "/var/lib/docker/containers/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b/3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b-json.log",
	        "Name": "/addons-658933",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-658933:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-658933",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3be029a1d6b79a89f03750b38c49c8b6be7c45373317ebf3e840edd728ec1c4b",
	                "LowerDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce7aeae0d0f129a16b46cfebd287e986b474dbe2f4746a5c880ae6d9cab656c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-658933",
	                "Source": "/var/lib/docker/volumes/addons-658933/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-658933",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-658933",
	                "name.minikube.sigs.k8s.io": "addons-658933",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8017a3fa4ff1d37c03e76184cd5c7dbf9fa32535c90958bcc30111c83a76d350",
	            "SandboxKey": "/var/run/docker/netns/8017a3fa4ff1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-658933": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4f40f7eea169273ab77e6077b611a0da6256676e25bf36be34a5384e5d64e88",
	                    "EndpointID": "a9c9dd9f392edfea3d71e8f8eef5854b3c5fc1703a76651ca67766835fc14d27",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ca:8a:06:a5:f1:e9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-658933",
	                        "3be029a1d6b7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
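
The Ports block above is how the harness reaches the node: each exposed service port is published on 127.0.0.1 with an ephemeral host port, and the ssh port used by the earlier sshutil lines is recovered with the same Go template that appears in the stderr logs. Illustrative invocation against this profile (output matches the inspect blob above):

$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-658933
32768
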
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-658933 -n addons-658933
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-658933 logs -n 25: (1.13509624s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-460922 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-460922   │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:29 UTC │
	│ delete  │ -p download-only-460922                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-460922   │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:29 UTC │
	│ start   │ -o=json --download-only -p download-only-839800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-839800   │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ delete  │ -p download-only-839800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-839800   │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ delete  │ -p download-only-460922                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-460922   │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ delete  │ -p download-only-839800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-839800   │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ start   │ --download-only -p download-docker-822958 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-822958 │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ delete  │ -p download-docker-822958                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-822958 │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ start   │ --download-only -p binary-mirror-393068 --alsologtostderr --binary-mirror http://127.0.0.1:39031 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-393068   │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ delete  │ -p binary-mirror-393068                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-393068   │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:30 UTC │
	│ addons  │ disable dashboard -p addons-658933                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-658933                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │                     │
	│ start   │ -p addons-658933 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:30 UTC │ 20 Nov 25 20:31 UTC │
	│ addons  │ addons-658933 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ addons-658933 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-658933 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-658933          │ jenkins │ v1.37.0 │ 20 Nov 25 20:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:30:02
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:30:02.705354  255490 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:30:02.705456  255490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:30:02.705461  255490 out.go:374] Setting ErrFile to fd 2...
	I1120 20:30:02.705464  255490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:30:02.705685  255490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:30:02.706192  255490 out.go:368] Setting JSON to false
	I1120 20:30:02.707020  255490 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11545,"bootTime":1763659058,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:30:02.707083  255490 start.go:143] virtualization: kvm guest
	I1120 20:30:02.709012  255490 out.go:179] * [addons-658933] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:30:02.710447  255490 notify.go:221] Checking for updates...
	I1120 20:30:02.710487  255490 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:30:02.711914  255490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:30:02.713304  255490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:30:02.714478  255490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:30:02.715547  255490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:30:02.716631  255490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:30:02.717961  255490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:30:02.742910  255490 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:30:02.743067  255490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:30:02.803008  255490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:30:02.793237887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:30:02.803134  255490 docker.go:319] overlay module found
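The docker info dump above feeds minikube's driver validation. A hand-run equivalent pulling just the fields this run relies on (storage driver, cgroup driver, server version, all present in the dump) might look like this sketch:

	docker system info --format 'storage={{.Driver}} cgroups={{.CgroupDriver}} server={{.ServerVersion}}'
	# expected on this host: storage=overlay2 cgroups=systemd server=29.0.2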
	I1120 20:30:02.804857  255490 out.go:179] * Using the docker driver based on user configuration
	I1120 20:30:02.806259  255490 start.go:309] selected driver: docker
	I1120 20:30:02.806338  255490 start.go:930] validating driver "docker" against <nil>
	I1120 20:30:02.806383  255490 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:30:02.807612  255490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:30:02.873925  255490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:30:02.863635767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:30:02.874116  255490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:30:02.874369  255490 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:30:02.875957  255490 out.go:179] * Using Docker driver with root privileges
	I1120 20:30:02.877277  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:02.877346  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:02.877357  255490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:30:02.877423  255490 start.go:353] cluster config:
	{Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:30:02.878682  255490 out.go:179] * Starting "addons-658933" primary control-plane node in "addons-658933" cluster
	I1120 20:30:02.879751  255490 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:30:02.880891  255490 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:30:02.881934  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:02.881968  255490 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:30:02.881982  255490 cache.go:65] Caching tarball of preloaded images
	I1120 20:30:02.882037  255490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:30:02.882096  255490 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:30:02.882111  255490 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:30:02.882518  255490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json ...
	I1120 20:30:02.882552  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json: {Name:mk4543ab9ea947efde347f2a2be95a3ca7691a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:02.899903  255490 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:30:02.900030  255490 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:30:02.900057  255490 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 20:30:02.900065  255490 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 20:30:02.900071  255490 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 20:30:02.900077  255490 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1120 20:30:16.112575  255490 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1120 20:30:16.112627  255490 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:30:16.112697  255490 start.go:360] acquireMachinesLock for addons-658933: {Name:mkb5841ba9dc697afe54624d0d76909a3356842e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:30:16.112817  255490 start.go:364] duration metric: took 94.287µs to acquireMachinesLock for "addons-658933"
	I1120 20:30:16.112844  255490 start.go:93] Provisioning new machine with config: &{Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:30:16.112928  255490 start.go:125] createHost starting for "" (driver="docker")
	I1120 20:30:16.114639  255490 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1120 20:30:16.114876  255490 start.go:159] libmachine.API.Create for "addons-658933" (driver="docker")
	I1120 20:30:16.114904  255490 client.go:173] LocalClient.Create starting
	I1120 20:30:16.115032  255490 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 20:30:16.330553  255490 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 20:30:16.534465  255490 cli_runner.go:164] Run: docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 20:30:16.552562  255490 cli_runner.go:211] docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 20:30:16.552636  255490 network_create.go:284] running [docker network inspect addons-658933] to gather additional debugging logs...
	I1120 20:30:16.552656  255490 cli_runner.go:164] Run: docker network inspect addons-658933
	W1120 20:30:16.568418  255490 cli_runner.go:211] docker network inspect addons-658933 returned with exit code 1
	I1120 20:30:16.568472  255490 network_create.go:287] error running [docker network inspect addons-658933]: docker network inspect addons-658933: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-658933 not found
	I1120 20:30:16.568485  255490 network_create.go:289] output of [docker network inspect addons-658933]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-658933 not found
	
	** /stderr **
	I1120 20:30:16.568580  255490 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:30:16.585933  255490 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0026d0b20}
	I1120 20:30:16.585999  255490 network_create.go:124] attempt to create docker network addons-658933 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1120 20:30:16.586054  255490 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-658933 addons-658933
	I1120 20:30:16.634577  255490 network_create.go:108] docker network addons-658933 192.168.49.0/24 created
	I1120 20:30:16.634611  255490 kic.go:121] calculated static IP "192.168.49.2" for the "addons-658933" container
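minikube creates the bridge network with an explicit subnet and gateway and then derives the node's static IP from it. A sketch to verify the result by hand (names and values taken from the log lines above):

	docker network inspect addons-658933 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
	# 192.168.49.0/24 gw=192.168.49.1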
	I1120 20:30:16.634679  255490 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 20:30:16.651593  255490 cli_runner.go:164] Run: docker volume create addons-658933 --label name.minikube.sigs.k8s.io=addons-658933 --label created_by.minikube.sigs.k8s.io=true
	I1120 20:30:16.669593  255490 oci.go:103] Successfully created a docker volume addons-658933
	I1120 20:30:16.669690  255490 cli_runner.go:164] Run: docker run --rm --name addons-658933-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --entrypoint /usr/bin/test -v addons-658933:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 20:30:20.129642  255490 cli_runner.go:217] Completed: docker run --rm --name addons-658933-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --entrypoint /usr/bin/test -v addons-658933:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (3.459902314s)
	I1120 20:30:20.129679  255490 oci.go:107] Successfully prepared a docker volume addons-658933
	I1120 20:30:20.129720  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:20.129740  255490 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 20:30:20.129810  255490 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-658933:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 20:30:24.432158  255490 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-658933:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.302289728s)
	I1120 20:30:24.432198  255490 kic.go:203] duration metric: took 4.302454483s to extract preloaded images to volume ...
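The sidecar run above unpacks the preloaded image tarball into the addons-658933 volume before the node container exists. One way to spot-check the extraction (a sketch; the /var/lib/containers target is an assumption based on CRI-O's default storage layout, and --entrypoint overrides follow the same pattern minikube itself uses above):

	docker run --rm --entrypoint /bin/ls \
	  -v addons-658933:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a \
	  /var/lib/containers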
	W1120 20:30:24.432344  255490 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 20:30:24.432389  255490 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 20:30:24.432441  255490 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 20:30:24.487095  255490 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-658933 --name addons-658933 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-658933 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-658933 --network addons-658933 --ip 192.168.49.2 --volume addons-658933:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 20:30:24.801956  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Running}}
	I1120 20:30:24.820203  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:24.838694  255490 cli_runner.go:164] Run: docker exec addons-658933 stat /var/lib/dpkg/alternatives/iptables
	I1120 20:30:24.888132  255490 oci.go:144] the created container "addons-658933" has a running status.
	I1120 20:30:24.888171  255490 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa...
	I1120 20:30:25.070883  255490 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 20:30:25.101013  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:25.131534  255490 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 20:30:25.131593  255490 kic_runner.go:114] Args: [docker exec --privileged addons-658933 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 20:30:25.184394  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:25.204188  255490 machine.go:94] provisionDockerMachine start ...
	I1120 20:30:25.204326  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.225894  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.226203  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.226238  255490 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:30:25.362263  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-658933
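Ports 22, 8443 and friends are published to ephemeral 127.0.0.1 ports, so libmachine first resolves the mapping with the inspect template shown above and then provisions over SSH. A hand-run equivalent (template, key path, and user copied from the log; the mapped port will vary per run):

	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-658933)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa \
	  docker@127.0.0.1 hostname
	# addons-658933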
	
	I1120 20:30:25.362302  255490 ubuntu.go:182] provisioning hostname "addons-658933"
	I1120 20:30:25.362374  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.380874  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.381186  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.381211  255490 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-658933 && echo "addons-658933" | sudo tee /etc/hostname
	I1120 20:30:25.523323  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-658933
	
	I1120 20:30:25.523395  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.541855  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:25.542077  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:25.542093  255490 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-658933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-658933/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-658933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:30:25.674264  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:30:25.674298  255490 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:30:25.674316  255490 ubuntu.go:190] setting up certificates
	I1120 20:30:25.674325  255490 provision.go:84] configureAuth start
	I1120 20:30:25.674387  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:25.692070  255490 provision.go:143] copyHostCerts
	I1120 20:30:25.692159  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:30:25.692306  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:30:25.692378  255490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:30:25.692434  255490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.addons-658933 san=[127.0.0.1 192.168.49.2 addons-658933 localhost minikube]
	I1120 20:30:25.918654  255490 provision.go:177] copyRemoteCerts
	I1120 20:30:25.918727  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:30:25.918764  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:25.936658  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.031611  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:30:26.051080  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:30:26.068014  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:30:26.085192  255490 provision.go:87] duration metric: took 410.851336ms to configureAuth
	I1120 20:30:26.085234  255490 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:30:26.085415  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:26.085525  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.103290  255490 main.go:143] libmachine: Using SSH client type: native
	I1120 20:30:26.103498  255490 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1120 20:30:26.103516  255490 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:30:26.379766  255490 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:30:26.379794  255490 machine.go:97] duration metric: took 1.175562439s to provisionDockerMachine
	I1120 20:30:26.379804  255490 client.go:176] duration metric: took 10.264891227s to LocalClient.Create
	I1120 20:30:26.379823  255490 start.go:167] duration metric: took 10.264948521s to libmachine.API.Create "addons-658933"
	I1120 20:30:26.379848  255490 start.go:293] postStartSetup for "addons-658933" (driver="docker")
	I1120 20:30:26.379857  255490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:30:26.379911  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:30:26.379951  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.398162  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.494237  255490 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:30:26.497838  255490 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:30:26.497867  255490 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:30:26.497880  255490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:30:26.497937  255490 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:30:26.497971  255490 start.go:296] duration metric: took 118.116765ms for postStartSetup
	I1120 20:30:26.498281  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:26.515591  255490 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/config.json ...
	I1120 20:30:26.515956  255490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:30:26.516010  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.533439  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.625724  255490 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:30:26.630380  255490 start.go:128] duration metric: took 10.517434615s to createHost
	I1120 20:30:26.630412  255490 start.go:83] releasing machines lock for "addons-658933", held for 10.517579755s
	I1120 20:30:26.630484  255490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-658933
	I1120 20:30:26.648987  255490 ssh_runner.go:195] Run: cat /version.json
	I1120 20:30:26.649049  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.649058  255490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:30:26.649121  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:26.667742  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.667789  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:26.817535  255490 ssh_runner.go:195] Run: systemctl --version
	I1120 20:30:26.823827  255490 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:30:26.858465  255490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:30:26.863084  255490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:30:26.863146  255490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:30:26.888392  255490 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
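The find/-exec pass renames any bridge or podman CNI configs so only the kindnet config stays active. The same match, quoted for interactive use and printing instead of moving (a sketch):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) -print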
	I1120 20:30:26.888415  255490 start.go:496] detecting cgroup driver to use...
	I1120 20:30:26.888448  255490 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:30:26.888496  255490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:30:26.903744  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:30:26.915977  255490 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:30:26.916037  255490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:30:26.932151  255490 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:30:26.949454  255490 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:30:27.029821  255490 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:30:27.117850  255490 docker.go:234] disabling docker service ...
	I1120 20:30:27.117908  255490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:30:27.136083  255490 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:30:27.148242  255490 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:30:27.228284  255490 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:30:27.308696  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
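Because the kicbase image ships docker and cri-dockerd alongside CRI-O, both are stopped, disabled, and masked before CRI-O is configured. The sequence above, condensed into one sketch (same units and flags as the log):

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit"
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service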
	I1120 20:30:27.321107  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:30:27.335041  255490 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:30:27.335120  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.345736  255490 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:30:27.345814  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.354928  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.363478  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.372559  255490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:30:27.380575  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.389050  255490 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.402164  255490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:30:27.410501  255490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:30:27.417624  255490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:30:27.424692  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:27.505005  255490 ssh_runner.go:195] Run: sudo systemctl restart crio
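The sed edits rewrite CRI-O's drop-in before this restart. The log never prints the resulting file, so the expected matches below are an assumption reconstructed from those edits; a quick spot-check after the restart:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",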
	I1120 20:30:27.637887  255490 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:30:27.637966  255490 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:30:27.642010  255490 start.go:564] Will wait 60s for crictl version
	I1120 20:30:27.642066  255490 ssh_runner.go:195] Run: which crictl
	I1120 20:30:27.645848  255490 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:30:27.669633  255490 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:30:27.669735  255490 ssh_runner.go:195] Run: crio --version
	I1120 20:30:27.697787  255490 ssh_runner.go:195] Run: crio --version
	I1120 20:30:27.725654  255490 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:30:27.726985  255490 cli_runner.go:164] Run: docker network inspect addons-658933 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:30:27.744591  255490 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:30:27.748624  255490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
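The one-liner above is minikube's idempotent hosts-file update: drop any stale entry for the name, append a fresh one, then copy the temp file into place (the same pattern recurs below for control-plane.minikube.internal). As a reusable sketch of that pattern:

	set_host_entry() {  # usage: set_host_entry 192.168.49.1 host.minikube.internal
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}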
	I1120 20:30:27.758674  255490 kubeadm.go:884] updating cluster {Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:30:27.758801  255490 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:30:27.758861  255490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:30:27.790387  255490 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:30:27.790410  255490 crio.go:433] Images already preloaded, skipping extraction
	I1120 20:30:27.790456  255490 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:30:27.815605  255490 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:30:27.815630  255490 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:30:27.815638  255490 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 20:30:27.815730  255490 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-658933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:30:27.815794  255490 ssh_runner.go:195] Run: crio config
	I1120 20:30:27.859258  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:27.859277  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:27.859299  255490 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:30:27.859335  255490 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-658933 NodeName:addons-658933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:30:27.859496  255490 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-658933"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
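
This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below and later copied into place before init. minikube drives kubeadm itself; a hand-run equivalent would be roughly the sketch below (the kubeadm path under /var/lib/minikube/binaries/v1.34.1 is an assumption consistent with the kubelet path and the binaries listing in this log, and the preflight skip matches the SystemVerification note further down):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification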
	
	I1120 20:30:27.859572  255490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:30:27.867802  255490 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:30:27.867888  255490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 20:30:27.875613  255490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:30:27.887750  255490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:30:27.902065  255490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1120 20:30:27.914385  255490 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1120 20:30:27.918098  255490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:30:27.927525  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:28.008342  255490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:30:28.036018  255490 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933 for IP: 192.168.49.2
	I1120 20:30:28.036041  255490 certs.go:195] generating shared ca certs ...
	I1120 20:30:28.036057  255490 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.036178  255490 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:30:28.466205  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt ...
	I1120 20:30:28.466244  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt: {Name:mk6f97ec9583eb89bfd69ef395c34ff3ea55f3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.466473  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key ...
	I1120 20:30:28.466491  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key: {Name:mk3e7e29a295b7f6ffe6a8667dd55d70340288c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:28.466634  255490 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:30:29.048529  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt ...
	I1120 20:30:29.048572  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt: {Name:mk52c606e9c73320afcb1e218858dd869c111ce4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.048820  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key ...
	I1120 20:30:29.048841  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key: {Name:mk9df92e14340f94ea29b58a77daf340bce4f983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.048963  255490 certs.go:257] generating profile certs ...
	I1120 20:30:29.049050  255490 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key
	I1120 20:30:29.049069  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt with IP's: []
	I1120 20:30:29.301284  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt ...
	I1120 20:30:29.301317  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: {Name:mkd85bb771caeeaf317adc1d90008b021a4c8bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.301534  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key ...
	I1120 20:30:29.301552  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.key: {Name:mk9eaffcdf9a4d8d133011c84aa665656203b92b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.301667  255490 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8
	I1120 20:30:29.301697  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1120 20:30:29.765881  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 ...
	I1120 20:30:29.765924  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8: {Name:mkdf619e1dbbaac171eec8a1e6b70761a2885c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.766158  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8 ...
	I1120 20:30:29.766180  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8: {Name:mk5e4e602db673c2658bfd554a33054d1ef58bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.766321  255490 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt.941c89b8 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt
	I1120 20:30:29.766443  255490 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key.941c89b8 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key
	I1120 20:30:29.766534  255490 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key
	I1120 20:30:29.766570  255490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt with IP's: []
	I1120 20:30:29.934993  255490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt ...
	I1120 20:30:29.935036  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt: {Name:mk5f4d6904f51630b72384744580871b1ec140f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.935267  255490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key ...
	I1120 20:30:29.935286  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key: {Name:mkfe77cc656afbd3ae5eab9d2a938dae5a390e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:29.935493  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:30:29.935534  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:30:29.935573  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:30:29.935604  255490 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:30:29.936350  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:30:29.955079  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:30:29.972691  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:30:29.990062  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:30:30.007727  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 20:30:30.024466  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:30:30.040969  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:30:30.057938  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 20:30:30.074469  255490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:30:30.092814  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:30:30.104853  255490 ssh_runner.go:195] Run: openssl version
	I1120 20:30:30.110923  255490 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.117948  255490 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:30:30.127491  255490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.131199  255490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.131268  255490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:30:30.165031  255490 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:30:30.172953  255490 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
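The hash-named symlink is how OpenSSL's CA directory lookup works: certificates are located via <subject-hash>.0 links under /etc/ssl/certs. The two steps above, reproduced as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"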
	I1120 20:30:30.180264  255490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:30:30.183762  255490 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:30:30.183816  255490 kubeadm.go:401] StartCluster: {Name:addons-658933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-658933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:30:30.183910  255490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:30:30.183958  255490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:30:30.211077  255490 cri.go:89] found id: ""
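The empty result from crictl ps is the signal that no kube-system containers exist yet, which is why the checks that follow fall through to a fresh kubeadm init rather than a stale-config cleanup. A small Go sketch of the same listing, shelling out with exactly the flags from the log (illustrative only; requires crictl and sudo on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the log; --quiet prints only container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 on a fresh node
}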
	I1120 20:30:30.211141  255490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:30:30.219578  255490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 20:30:30.227183  255490 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 20:30:30.227256  255490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 20:30:30.234883  255490 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 20:30:30.234900  255490 kubeadm.go:158] found existing configuration files:
	
	I1120 20:30:30.234941  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 20:30:30.242205  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 20:30:30.242279  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 20:30:30.249521  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 20:30:30.256865  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 20:30:30.256931  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 20:30:30.263941  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 20:30:30.271617  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 20:30:30.271690  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 20:30:30.280239  255490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 20:30:30.288812  255490 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 20:30:30.288887  255490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 20:30:30.297096  255490 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 20:30:30.359287  255490 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 20:30:30.418534  255490 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1120 20:30:39.395515  255490 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 20:30:39.395629  255490 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 20:30:39.395738  255490 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 20:30:39.395841  255490 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 20:30:39.395876  255490 kubeadm.go:319] OS: Linux
	I1120 20:30:39.395926  255490 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 20:30:39.395979  255490 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 20:30:39.396027  255490 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 20:30:39.396070  255490 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 20:30:39.396118  255490 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 20:30:39.396195  255490 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 20:30:39.396324  255490 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 20:30:39.396366  255490 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 20:30:39.396463  255490 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 20:30:39.396640  255490 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 20:30:39.396765  255490 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 20:30:39.396857  255490 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 20:30:39.398308  255490 out.go:252]   - Generating certificates and keys ...
	I1120 20:30:39.398397  255490 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 20:30:39.398507  255490 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 20:30:39.398606  255490 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 20:30:39.398692  255490 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 20:30:39.398771  255490 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 20:30:39.398841  255490 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 20:30:39.398917  255490 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 20:30:39.399051  255490 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-658933 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 20:30:39.399094  255490 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 20:30:39.399205  255490 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-658933 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1120 20:30:39.399291  255490 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 20:30:39.399359  255490 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 20:30:39.399398  255490 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 20:30:39.399450  255490 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 20:30:39.399494  255490 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 20:30:39.399543  255490 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 20:30:39.399596  255490 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 20:30:39.399654  255490 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 20:30:39.399701  255490 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 20:30:39.399770  255490 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 20:30:39.399832  255490 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 20:30:39.400997  255490 out.go:252]   - Booting up control plane ...
	I1120 20:30:39.401080  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 20:30:39.401150  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 20:30:39.401240  255490 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 20:30:39.401382  255490 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 20:30:39.401464  255490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 20:30:39.401558  255490 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 20:30:39.401650  255490 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 20:30:39.401720  255490 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 20:30:39.401886  255490 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 20:30:39.402022  255490 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 20:30:39.402093  255490 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.91805ms
	I1120 20:30:39.402235  255490 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 20:30:39.402312  255490 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1120 20:30:39.402399  255490 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 20:30:39.402468  255490 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 20:30:39.402531  255490 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.521604309s
	I1120 20:30:39.402589  255490 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.317842825s
	I1120 20:30:39.402653  255490 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001491631s
	I1120 20:30:39.402743  255490 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 20:30:39.402899  255490 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 20:30:39.402998  255490 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 20:30:39.403204  255490 kubeadm.go:319] [mark-control-plane] Marking the node addons-658933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 20:30:39.403305  255490 kubeadm.go:319] [bootstrap-token] Using token: 3wjd0t.465tl4dd1yvzdt5n
	I1120 20:30:39.405321  255490 out.go:252]   - Configuring RBAC rules ...
	I1120 20:30:39.405460  255490 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 20:30:39.405565  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 20:30:39.405724  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 20:30:39.405836  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 20:30:39.405968  255490 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 20:30:39.406063  255490 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 20:30:39.406160  255490 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 20:30:39.406241  255490 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 20:30:39.406298  255490 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 20:30:39.406307  255490 kubeadm.go:319] 
	I1120 20:30:39.406386  255490 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 20:30:39.406395  255490 kubeadm.go:319] 
	I1120 20:30:39.406527  255490 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 20:30:39.406547  255490 kubeadm.go:319] 
	I1120 20:30:39.406582  255490 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 20:30:39.406664  255490 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 20:30:39.406724  255490 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 20:30:39.406732  255490 kubeadm.go:319] 
	I1120 20:30:39.406792  255490 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 20:30:39.406800  255490 kubeadm.go:319] 
	I1120 20:30:39.406842  255490 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 20:30:39.406851  255490 kubeadm.go:319] 
	I1120 20:30:39.406905  255490 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 20:30:39.406995  255490 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 20:30:39.407055  255490 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 20:30:39.407058  255490 kubeadm.go:319] 
	I1120 20:30:39.407127  255490 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 20:30:39.407211  255490 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 20:30:39.407231  255490 kubeadm.go:319] 
	I1120 20:30:39.407300  255490 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3wjd0t.465tl4dd1yvzdt5n \
	I1120 20:30:39.407394  255490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 20:30:39.407422  255490 kubeadm.go:319] 	--control-plane 
	I1120 20:30:39.407426  255490 kubeadm.go:319] 
	I1120 20:30:39.407548  255490 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 20:30:39.407558  255490 kubeadm.go:319] 
	I1120 20:30:39.407653  255490 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3wjd0t.465tl4dd1yvzdt5n \
	I1120 20:30:39.407800  255490 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
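The --discovery-token-ca-cert-hash in both join commands pins the cluster CA: it is a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, not of the whole certificate. A hedged Go sketch that recomputes it from the ca.crt this run copied to /var/lib/minikube/certs (a standalone reimplementation for illustration, not kubeadm's code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the scp step earlier in this log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash covers the DER SubjectPublicKeyInfo only, so it survives
	// re-issuing the CA certificate as long as the key pair is unchanged.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}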
	I1120 20:30:39.407814  255490 cni.go:84] Creating CNI manager for ""
	I1120 20:30:39.407822  255490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:30:39.409798  255490 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 20:30:39.410818  255490 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 20:30:39.415289  255490 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 20:30:39.415307  255490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 20:30:39.428329  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 20:30:39.631760  255490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 20:30:39.631846  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:39.631940  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-658933 minikube.k8s.io/updated_at=2025_11_20T20_30_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=addons-658933 minikube.k8s.io/primary=true
	I1120 20:30:39.716969  255490 ops.go:34] apiserver oom_adj: -16
	I1120 20:30:39.717090  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:40.218175  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:40.717186  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:41.218026  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:41.717832  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:42.218081  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:42.717840  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:43.217376  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:43.718191  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:44.217589  255490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 20:30:44.287544  255490 kubeadm.go:1114] duration metric: took 4.655753879s to wait for elevateKubeSystemPrivileges
	I1120 20:30:44.287587  255490 kubeadm.go:403] duration metric: took 14.103777939s to StartCluster
	I1120 20:30:44.287615  255490 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:44.287768  255490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:30:44.288172  255490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:30:44.288399  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 20:30:44.288427  255490 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:30:44.288497  255490 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1120 20:30:44.288644  255490 addons.go:70] Setting yakd=true in profile "addons-658933"
	I1120 20:30:44.288669  255490 addons.go:239] Setting addon yakd=true in "addons-658933"
	I1120 20:30:44.288697  255490 addons.go:70] Setting registry-creds=true in profile "addons-658933"
	I1120 20:30:44.288696  255490 addons.go:70] Setting inspektor-gadget=true in profile "addons-658933"
	I1120 20:30:44.288723  255490 addons.go:239] Setting addon registry-creds=true in "addons-658933"
	I1120 20:30:44.288725  255490 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-658933"
	I1120 20:30:44.288725  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:44.288720  255490 addons.go:70] Setting default-storageclass=true in profile "addons-658933"
	I1120 20:30:44.288739  255490 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-658933"
	I1120 20:30:44.288745  255490 addons.go:70] Setting metrics-server=true in profile "addons-658933"
	I1120 20:30:44.288753  255490 addons.go:70] Setting ingress=true in profile "addons-658933"
	I1120 20:30:44.288754  255490 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-658933"
	I1120 20:30:44.288706  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288763  255490 addons.go:239] Setting addon metrics-server=true in "addons-658933"
	I1120 20:30:44.288771  255490 addons.go:70] Setting ingress-dns=true in profile "addons-658933"
	I1120 20:30:44.288783  255490 addons.go:239] Setting addon ingress-dns=true in "addons-658933"
	I1120 20:30:44.288796  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288813  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.289160  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289170  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289349  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289352  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.289462  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288745  255490 addons.go:70] Setting gcp-auth=true in profile "addons-658933"
	I1120 20:30:44.289623  255490 mustload.go:66] Loading cluster: addons-658933
	I1120 20:30:44.289778  255490 addons.go:70] Setting volcano=true in profile "addons-658933"
	I1120 20:30:44.289796  255490 addons.go:239] Setting addon volcano=true in "addons-658933"
	I1120 20:30:44.289815  255490 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:30:44.289828  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.289849  255490 addons.go:70] Setting cloud-spanner=true in profile "addons-658933"
	I1120 20:30:44.289871  255490 addons.go:239] Setting addon cloud-spanner=true in "addons-658933"
	I1120 20:30:44.289913  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.290072  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.290288  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.290429  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288764  255490 addons.go:239] Setting addon ingress=true in "addons-658933"
	I1120 20:30:44.293080  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.288716  255490 addons.go:70] Setting storage-provisioner=true in profile "addons-658933"
	I1120 20:30:44.293429  255490 addons.go:239] Setting addon storage-provisioner=true in "addons-658933"
	I1120 20:30:44.293472  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.293643  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.294012  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.288757  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.290446  255490 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-658933"
	I1120 20:30:44.290464  255490 addons.go:70] Setting volumesnapshots=true in profile "addons-658933"
	I1120 20:30:44.290480  255490 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-658933"
	I1120 20:30:44.290492  255490 addons.go:70] Setting registry=true in profile "addons-658933"
	I1120 20:30:44.288728  255490 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-658933"
	I1120 20:30:44.288733  255490 addons.go:239] Setting addon inspektor-gadget=true in "addons-658933"
	I1120 20:30:44.291508  255490 out.go:179] * Verifying Kubernetes components...
	I1120 20:30:44.294358  255490 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-658933"
	I1120 20:30:44.295346  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.294986  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.296198  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.295055  255490 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-658933"
	I1120 20:30:44.301281  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.301333  255490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:30:44.295067  255490 addons.go:239] Setting addon volumesnapshots=true in "addons-658933"
	I1120 20:30:44.301581  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.295079  255490 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-658933"
	I1120 20:30:44.301684  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.301761  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.302133  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.302169  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.295109  255490 addons.go:239] Setting addon registry=true in "addons-658933"
	I1120 20:30:44.302256  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.295145  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.305972  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.306928  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.340707  255490 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1120 20:30:44.342975  255490 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:30:44.343000  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1120 20:30:44.343094  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
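The --format argument on these docker inspect calls is a Go text/template: the nested index calls pull the first host binding for container port 22/tcp, which is how minikube discovers the node's SSH port (the 32768 seen in the sshutil lines below). A standalone sketch of the template against mock data, assuming nothing beyond the template string in the log:

package main

import (
	"os"
	"text/template"
)

type binding struct{ HostIP, HostPort string }

func main() {
	// Mock of the fragment of `docker container inspect` output the template reads.
	data := struct {
		NetworkSettings struct{ Ports map[string][]binding }
	}{}
	data.NetworkSettings.Ports = map[string][]binding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32768"}},
	}
	tmpl := template.Must(template.New("ssh-port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 32768
		panic(err)
	}
}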
	I1120 20:30:44.356746  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1120 20:30:44.359069  255490 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1120 20:30:44.360315  255490 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1120 20:30:44.360315  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:44.361396  255490 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:30:44.361419  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1120 20:30:44.361477  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.363554  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:44.363898  255490 addons.go:239] Setting addon default-storageclass=true in "addons-658933"
	I1120 20:30:44.363946  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.365235  255490 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:30:44.365256  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1120 20:30:44.365314  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.369630  255490 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1120 20:30:44.370336  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.372298  255490 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:30:44.372322  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1120 20:30:44.372380  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.372856  255490 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:30:44.372872  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1120 20:30:44.372920  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.373595  255490 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1120 20:30:44.383200  255490 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-658933"
	I1120 20:30:44.389843  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.383802  255490 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1120 20:30:44.390391  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:44.384799  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1120 20:30:44.390434  255490 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1120 20:30:44.390504  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.394277  255490 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1120 20:30:44.397389  255490 out.go:179]   - Using image docker.io/registry:3.0.0
	I1120 20:30:44.397755  255490 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1120 20:30:44.398687  255490 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1120 20:30:44.398712  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1120 20:30:44.398779  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.398975  255490 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1120 20:30:44.399013  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1120 20:30:44.399093  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.399320  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1120 20:30:44.399332  255490 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1120 20:30:44.399377  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.399580  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1120 20:30:44.400624  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1120 20:30:44.400651  255490 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	W1120 20:30:44.400662  255490 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1120 20:30:44.400715  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.404800  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:44.420607  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1120 20:30:44.422936  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1120 20:30:44.424248  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1120 20:30:44.425355  255490 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 20:30:44.426548  255490 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:30:44.426569  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 20:30:44.426635  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.426845  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1120 20:30:44.430771  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1120 20:30:44.430933  255490 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 20:30:44.430998  255490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 20:30:44.431246  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.433803  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1120 20:30:44.434043  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.437710  255490 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1120 20:30:44.439018  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1120 20:30:44.439142  255490 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:30:44.439156  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1120 20:30:44.439224  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.443476  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.443544  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.444710  255490 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1120 20:30:44.445404  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.445815  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1120 20:30:44.445834  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1120 20:30:44.445901  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.466449  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.466798  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.466848  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.470817  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.473356  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.476614  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.481996  255490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 20:30:44.482475  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.485242  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.485283  255490 retry.go:31] will retry after 155.42709ms: ssh: handshake failed: EOF
	I1120 20:30:44.493808  255490 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1120 20:30:44.494404  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:44.496714  255490 out.go:179]   - Using image docker.io/busybox:stable
	I1120 20:30:44.498797  255490 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:30:44.498881  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1120 20:30:44.498977  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:44.504727  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.506552  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.506894  255490 retry.go:31] will retry after 201.636658ms: ssh: handshake failed: EOF
	I1120 20:30:44.516460  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.517611  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.517692  255490 retry.go:31] will retry after 374.650461ms: ssh: handshake failed: EOF
	I1120 20:30:44.518569  255490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:30:44.538611  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	W1120 20:30:44.540447  255490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1120 20:30:44.540483  255490 retry.go:31] will retry after 337.040085ms: ssh: handshake failed: EOF
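The handshake EOFs above are transient: many addon goroutines dial the node's SSH port at once while sshd is still settling, so retry.go sleeps for a short, jittered interval and redials. A minimal sketch of that retry pattern, with a made-up attempt budget and backoff shape (minikube's actual policy may differ):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryDial retries fn with a jittered, growing delay, mirroring the
// "will retry after 155.42709ms" lines in the log.
func retryDial(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("dial failure (will retry after %s): %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retryDial(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil // succeeds on the third dial
	})
}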
	I1120 20:30:44.608085  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1120 20:30:44.625959  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1120 20:30:44.636133  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1120 20:30:44.648083  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1120 20:30:44.648111  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1120 20:30:44.664850  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1120 20:30:44.664884  255490 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1120 20:30:44.665052  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1120 20:30:44.665734  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1120 20:30:44.669274  255490 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1120 20:30:44.669297  255490 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1120 20:30:44.681348  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1120 20:30:44.681381  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1120 20:30:44.687355  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 20:30:44.688633  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1120 20:30:44.688657  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1120 20:30:44.693958  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1120 20:30:44.693979  255490 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1120 20:30:44.703806  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1120 20:30:44.703904  255490 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1120 20:30:44.705656  255490 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:30:44.705675  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1120 20:30:44.717708  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1120 20:30:44.717807  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1120 20:30:44.728532  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1120 20:30:44.728627  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1120 20:30:44.739499  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1120 20:30:44.740657  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1120 20:30:44.740678  255490 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1120 20:30:44.749150  255490 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:30:44.749183  255490 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1120 20:30:44.760005  255490 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1120 20:30:44.760043  255490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1120 20:30:44.765594  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1120 20:30:44.765621  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1120 20:30:44.779557  255490 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:30:44.779594  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1120 20:30:44.787155  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1120 20:30:44.793924  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1120 20:30:44.794021  255490 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1120 20:30:44.803069  255490 node_ready.go:35] waiting up to 6m0s for node "addons-658933" to be "Ready" ...
	I1120 20:30:44.803646  255490 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
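This confirms the sed pipeline a few lines up: minikube rewrote the coredns ConfigMap so the Corefile resolves host.minikube.internal to the gateway address 192.168.49.1. A hedged client-go sketch of the same edit done programmatically (the log-plugin injection from the sed command is omitted for brevity; the kubeconfig path and names come from the log):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts block ahead of the forward directive, as the sed script does.
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}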
	I1120 20:30:44.806578  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1120 20:30:44.806600  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1120 20:30:44.818564  255490 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:30:44.818590  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1120 20:30:44.840236  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1120 20:30:44.875691  255490 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1120 20:30:44.875719  255490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1120 20:30:44.886668  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1120 20:30:44.901027  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1120 20:30:44.921529  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 20:30:44.945247  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1120 20:30:44.945346  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1120 20:30:45.038393  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1120 20:30:45.038486  255490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1120 20:30:45.110033  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1120 20:30:45.110059  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1120 20:30:45.112358  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1120 20:30:45.118049  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1120 20:30:45.149133  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1120 20:30:45.149168  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1120 20:30:45.194796  255490 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:30:45.194850  255490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1120 20:30:45.252975  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1120 20:30:45.313308  255490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-658933" context rescaled to 1 replicas
	I1120 20:30:45.956010  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.319832842s)
	I1120 20:30:45.956123  255490 addons.go:480] Verifying addon ingress=true in "addons-658933"
	I1120 20:30:45.956235  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.290454836s)
	I1120 20:30:45.956366  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.216841507s)
	I1120 20:30:45.956382  255490 addons.go:480] Verifying addon registry=true in "addons-658933"
	I1120 20:30:45.956137  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.291050342s)
	I1120 20:30:45.956331  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.268950251s)
	I1120 20:30:45.956530  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.169270174s)
	I1120 20:30:45.956545  255490 addons.go:480] Verifying addon metrics-server=true in "addons-658933"
	I1120 20:30:45.956604  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.115169622s)
	I1120 20:30:45.957854  255490 out.go:179] * Verifying ingress addon...
	I1120 20:30:45.957888  255490 out.go:179] * Verifying registry addon...
	I1120 20:30:45.960418  255490 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-658933 service yakd-dashboard -n yakd-dashboard
	
	I1120 20:30:45.962275  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1120 20:30:45.962275  255490 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1120 20:30:45.965232  255490 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:30:45.965315  255490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:30:45.965329  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
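The repeating kapi.go:96 lines that follow are a poll loop: list the pods matching a label selector, report the current state, and keep going until every pod leaves Pending and its Ready condition turns true. A condensed sketch of that pattern with client-go (the selectors appear in the log; the function itself and its timings are illustrative):

package kapi

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPods polls until every pod matching selector in ns is Running and Ready.
func waitForLabeledPods(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before pods exist
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}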
	I1120 20:30:46.371380  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.484574379s)
	W1120 20:30:46.371426  255490 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1120 20:30:46.371453  255490 retry.go:31] will retry after 250.434733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the failed apply above]
	I1120 20:30:46.371449  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.470280942s)
	I1120 20:30:46.371497  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.449929791s)
	I1120 20:30:46.371546  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.259163755s)
	I1120 20:30:46.371589  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.253515744s)
	I1120 20:30:46.371836  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.118810669s)
	I1120 20:30:46.371861  255490 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-658933"
	I1120 20:30:46.373188  255490 out.go:179] * Verifying csi-hostpath-driver addon...
	I1120 20:30:46.375253  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1120 20:30:46.378188  255490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:30:46.378209  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1120 20:30:46.380074  255490 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
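The "object has been modified" warning above is an optimistic-concurrency conflict: two writers raced to flip the default-class annotation on the csi-hostpath-sc StorageClass, and the loser's stale resourceVersion was rejected. client-go's standard answer is retry.RetryOnConflict, which re-reads the object before each attempt; a sketch under that assumption (the annotation handling is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-fetch and re-apply the mutation on every conflict so the update
	// always carries a fresh resourceVersion.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "csi-hostpath-sc", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}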
	I1120 20:30:46.479294  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:46.479401  255490 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1120 20:30:46.479415  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:46.622528  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
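The failure retried here is a CRD registration race: the same apply both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, but the new API group is not discoverable yet, so the resource-mapping lookup fails ("ensure CRDs are installed first") and the addon layer schedules the re-apply just issued above. A generic retry-with-backoff sketch in the spirit of retry.go (command, attempt count, and backoff values are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until the CRD-backed resources resolve,
// backing off between attempts the way the addon installer's retry loop does.
func applyWithRetry(args []string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between retries
	}
	return lastErr
}

func main() {
	args := []string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
	if err := applyWithRetry(args, 5, 250*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}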
	W1120 20:30:46.806760  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:46.878871  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:46.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:46.965939  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:47.378658  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:47.479446  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:47.479516  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:47.878841  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:47.965550  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:47.965638  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:48.378538  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:48.478921  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:48.479116  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:48.879357  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:48.965910  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:48.966139  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.113049  255490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.490474653s)
	W1120 20:30:49.306531  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:49.379072  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:49.479567  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:49.479719  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.878714  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:49.965738  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:49.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.378645  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:50.479558  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.479780  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:50.877983  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:50.965503  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:50.965728  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1120 20:30:51.306808  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:51.379107  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:51.480176  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:51.480376  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:51.878787  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:51.965534  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:51.965747  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:52.012397  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1120 20:30:52.012456  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:52.031008  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:30:52.139277  255490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1120 20:30:52.152717  255490 addons.go:239] Setting addon gcp-auth=true in "addons-658933"
	I1120 20:30:52.152768  255490 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:30:52.153133  255490 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:30:52.170737  255490 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1120 20:30:52.170801  255490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:30:52.188332  255490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
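Because the node is a Docker container, sshutil dials the host port Docker published for the container's 22/tcp (127.0.0.1:32768 here) with the machine's id_rsa key. A bare-bones equivalent with golang.org/x/crypto/ssh (key path and port copied from the log; host-key checking is skipped only because this is a throwaway test rig):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only; not for production
	}
	// 22/tcp inside the container is published on 127.0.0.1:32768 by Docker.
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.CombinedOutput("cat /var/lib/minikube/google_cloud_project")
	fmt.Printf("%s", out)
}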
	I1120 20:30:52.280947  255490 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1120 20:30:52.282264  255490 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1120 20:30:52.283387  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1120 20:30:52.283411  255490 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1120 20:30:52.297064  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1120 20:30:52.297098  255490 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1120 20:30:52.310540  255490 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:30:52.310560  255490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1120 20:30:52.322756  255490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1120 20:30:52.379105  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:52.465771  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:52.465961  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:52.633423  255490 addons.go:480] Verifying addon gcp-auth=true in "addons-658933"
	I1120 20:30:52.635628  255490 out.go:179] * Verifying gcp-auth addon...
	I1120 20:30:52.637645  255490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1120 20:30:52.640152  255490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1120 20:30:52.640173  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:52.879120  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:52.965912  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:52.966120  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:53.140649  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:53.378517  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:53.465361  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:53.465532  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:53.640958  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1120 20:30:53.807014  255490 node_ready.go:57] node "addons-658933" has "Ready":"False" status (will retry)
	I1120 20:30:53.878808  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:53.965738  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:53.965765  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:54.140536  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:54.378571  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:54.465306  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:54.465508  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:54.641426  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:54.878834  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:54.965886  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:54.965948  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.140886  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:55.378515  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:55.465729  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:55.465788  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.643587  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:55.805975  255490 node_ready.go:49] node "addons-658933" is "Ready"
	I1120 20:30:55.806014  255490 node_ready.go:38] duration metric: took 11.00289094s for node "addons-658933" to be "Ready" ...
	I1120 20:30:55.806034  255490 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:30:55.806097  255490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:30:55.822010  255490 api_server.go:72] duration metric: took 11.533542492s to wait for apiserver process to appear ...
	I1120 20:30:55.822037  255490 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:30:55.822067  255490 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:30:55.826210  255490 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:30:55.827161  255490 api_server.go:141] control plane version: v1.34.1
	I1120 20:30:55.827186  255490 api_server.go:131] duration metric: took 5.142237ms to wait for apiserver health ...
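The healthz probe above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A quick standalone check looks like this sketch (endpoint from the log; skipping TLS verification is an assumption for brevity, since the apiserver cert is signed by minikube's own CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Local smoke check only: trust the self-signed minikube CA implicitly.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}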
	I1120 20:30:55.827197  255490 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:30:55.831306  255490 system_pods.go:59] 20 kube-system pods found
	I1120 20:30:55.831341  255490 system_pods.go:61] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending
	I1120 20:30:55.831354  255490 system_pods.go:61] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:55.831363  255490 system_pods.go:61] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:55.831373  255490 system_pods.go:61] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:55.831382  255490 system_pods.go:61] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:55.831394  255490 system_pods.go:61] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:55.831400  255490 system_pods.go:61] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:55.831405  255490 system_pods.go:61] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:55.831409  255490 system_pods.go:61] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:55.831417  255490 system_pods.go:61] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:55.831422  255490 system_pods.go:61] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:55.831427  255490 system_pods.go:61] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:55.831436  255490 system_pods.go:61] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:55.831444  255490 system_pods.go:61] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:55.831451  255490 system_pods.go:61] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:55.831459  255490 system_pods.go:61] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:55.831465  255490 system_pods.go:61] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending
	I1120 20:30:55.831472  255490 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:55.831481  255490 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending
	I1120 20:30:55.831489  255490 system_pods.go:61] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:55.831498  255490 system_pods.go:74] duration metric: took 4.293354ms to wait for pod list to return data ...
	I1120 20:30:55.831510  255490 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:30:55.833792  255490 default_sa.go:45] found service account: "default"
	I1120 20:30:55.833810  255490 default_sa.go:55] duration metric: took 2.294915ms for default service account to be created ...
	I1120 20:30:55.833818  255490 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:30:55.837093  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:55.837126  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending
	I1120 20:30:55.837141  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:55.837151  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:55.837163  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:55.837173  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:55.837186  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:55.837195  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:55.837202  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:55.837209  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:55.837231  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:55.837241  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:55.837255  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:55.837265  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:55.837274  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:55.837297  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:55.837309  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:55.837322  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending
	I1120 20:30:55.837331  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:55.837341  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending
	I1120 20:30:55.837350  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:55.837371  255490 retry.go:31] will retry after 233.691892ms: missing components: kube-dns
	I1120 20:30:55.878700  255490 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1120 20:30:55.878723  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:55.979841  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:55.980337  255490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1120 20:30:55.980356  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.082886  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.082932  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.082942  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.082953  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.082960  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.082970  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.082977  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.082984  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.082991  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.082996  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.083005  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.083011  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.083017  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.083030  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.083041  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.083054  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.083062  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.083069  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.083081  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.083091  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.083099  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.083123  255490 retry.go:31] will retry after 284.764079ms: missing components: kube-dns
	I1120 20:30:56.179618  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:56.372482  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.372515  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.372524  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.372530  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.372536  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.372542  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.372546  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.372550  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.372554  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.372557  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.372562  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.372566  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.372569  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.372575  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.372582  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.372587  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.372593  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.372599  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.372606  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.372612  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.372620  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.372635  255490 retry.go:31] will retry after 300.095602ms: missing components: kube-dns
	I1120 20:30:56.377851  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:56.465605  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.465629  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:56.642003  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:56.677833  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:56.677877  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:56.677890  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 20:30:56.677902  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:56.677910  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:56.677919  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:56.677924  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:56.677932  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:56.677941  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:56.677947  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:56.677958  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:56.677969  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:56.677982  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:56.677991  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:56.678004  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:56.678012  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:56.678022  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:56.678029  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:56.678040  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.678050  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:56.678060  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 20:30:56.678082  255490 retry.go:31] will retry after 380.404175ms: missing components: kube-dns
	I1120 20:30:56.880262  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:56.981309  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:56.981431  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:57.064071  255490 system_pods.go:86] 20 kube-system pods found
	I1120 20:30:57.064112  255490 system_pods.go:89] "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1120 20:30:57.064120  255490 system_pods.go:89] "coredns-66bc5c9577-zbjpk" [a3a7d536-23ee-4e63-baf9-ad47db0d7bbf] Running
	I1120 20:30:57.064132  255490 system_pods.go:89] "csi-hostpath-attacher-0" [48bb63de-0aef-4485-b006-c44eb47c8bcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1120 20:30:57.064140  255490 system_pods.go:89] "csi-hostpath-resizer-0" [4680f16e-c113-4f04-b0c1-d54828d41c80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1120 20:30:57.064171  255490 system_pods.go:89] "csi-hostpathplugin-z7dj2" [3258bfe1-e384-479c-b66c-10c3fc16ef57] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1120 20:30:57.064182  255490 system_pods.go:89] "etcd-addons-658933" [bd21e971-3c09-47cb-915c-6e5a8ead3b0d] Running
	I1120 20:30:57.064189  255490 system_pods.go:89] "kindnet-46wwr" [b7a18160-7629-4b07-bb64-857c9a7308c2] Running
	I1120 20:30:57.064198  255490 system_pods.go:89] "kube-apiserver-addons-658933" [d1fd4a31-1dd2-48aa-a76e-e93d47e8efb6] Running
	I1120 20:30:57.064204  255490 system_pods.go:89] "kube-controller-manager-addons-658933" [d6788b0b-c3f3-4c47-9652-d739f8d5573f] Running
	I1120 20:30:57.064225  255490 system_pods.go:89] "kube-ingress-dns-minikube" [7e145649-9f14-4c8c-aae7-2207d47ab7cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1120 20:30:57.064234  255490 system_pods.go:89] "kube-proxy-tkd84" [edf626f1-58b5-44d7-a122-816fdea159ac] Running
	I1120 20:30:57.064241  255490 system_pods.go:89] "kube-scheduler-addons-658933" [45128021-d6b3-428c-b487-8f837b7e3277] Running
	I1120 20:30:57.064250  255490 system_pods.go:89] "metrics-server-85b7d694d7-z2pc4" [edc0b725-f396-46c7-b1d7-7fbb46f3a01f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1120 20:30:57.064260  255490 system_pods.go:89] "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1120 20:30:57.064271  255490 system_pods.go:89] "registry-6b586f9694-zwcwl" [c148d5cc-6395-4d59-83c2-d4ba492c53f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1120 20:30:57.064280  255490 system_pods.go:89] "registry-creds-764b6fb674-h47jz" [730db9ac-a022-4b3f-a29d-60b579072144] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1120 20:30:57.064292  255490 system_pods.go:89] "registry-proxy-lq2h5" [e7af5bec-27ba-4ef1-85bb-8e43373b8672] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1120 20:30:57.064301  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7fv92" [c5b18b0f-7795-443f-9238-860ab5fe9c6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:57.064313  255490 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bxn2q" [0525cf22-4c19-454d-9c4a-b67435fa3e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1120 20:30:57.064319  255490 system_pods.go:89] "storage-provisioner" [15163a84-42c7-4255-a2df-a8b6cb661252] Running
	I1120 20:30:57.064334  255490 system_pods.go:126] duration metric: took 1.230509144s to wait for k8s-apps to be running ...
	I1120 20:30:57.064347  255490 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:30:57.064407  255490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:30:57.080958  255490 system_svc.go:56] duration metric: took 16.60059ms WaitForService to wait for kubelet
	I1120 20:30:57.080993  255490 kubeadm.go:587] duration metric: took 12.79253115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:30:57.081019  255490 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:30:57.084381  255490 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:30:57.084414  255490 node_conditions.go:123] node cpu capacity is 8
	I1120 20:30:57.084433  255490 node_conditions.go:105] duration metric: took 3.407997ms to run NodePressure ...
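The NodePressure step reads capacity and conditions straight off the Node object: the 304681132Ki of ephemeral storage and 8 CPUs above come from status.capacity, and the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) are expected to be False. Reading the same fields with client-go (kubeconfig path as in the earlier sketches, illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-658933", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String())

	// Pressure conditions should report False on a healthy node.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}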
	I1120 20:30:57.084450  255490 start.go:242] waiting for startup goroutines ...
	I1120 20:30:57.141322  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:57.378668  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:57.465749  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:57.465796  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:57.641452  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:57.878664  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:57.965359  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:57.965407  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:58.141706  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:58.379207  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:58.467832  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:58.467859  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:58.640995  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:58.879536  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:58.966611  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:58.966985  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:59.141696  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:59.379345  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:59.466875  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:59.466957  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:30:59.643733  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:30:59.879237  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:30:59.966151  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:30:59.966369  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:00.141266  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:00.378693  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:00.465580  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:00.465615  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:00.641643  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:00.878740  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:00.965814  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:00.965899  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:01.142276  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:01.379527  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:01.467961  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:01.468294  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:01.641502  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:01.879106  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:01.967199  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:01.967396  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:02.141394  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:02.379607  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:02.465725  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:02.465784  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:02.641757  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:02.878862  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:02.965741  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:02.965815  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:03.140943  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:03.379765  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:03.465982  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:03.466022  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:03.641315  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:03.878974  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:03.966371  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:03.966442  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:04.142603  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:04.380193  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:04.466610  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:04.466692  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:04.641125  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.012082  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.012104  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:05.012239  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.140693  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.379258  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.466043  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.466135  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:05.640982  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:05.879539  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:05.965482  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:05.965547  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:06.141057  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:06.379432  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:06.466122  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:06.466259  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:06.641283  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:06.880151  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:06.965996  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:06.966027  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:07.141428  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:07.378588  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:07.465446  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:07.465490  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:07.640922  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:07.880403  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:07.966163  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:07.966318  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:08.141588  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:08.379578  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:08.465745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:08.466030  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:08.641822  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:08.879513  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:08.965627  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:08.965694  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:09.141852  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:09.379966  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:09.481265  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:09.481365  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:09.641578  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:09.879130  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:09.966006  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:09.966015  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:10.141002  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:10.379779  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:10.466077  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:10.466123  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:10.641159  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:10.981688  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:10.981745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:10.981991  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:11.142443  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:11.379915  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:11.466110  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:11.466693  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:11.641774  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:11.879258  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:11.966374  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:11.966402  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:12.141454  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:12.379441  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:12.466661  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:12.466680  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:12.640818  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:12.879836  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:12.965465  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:12.965568  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.141956  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:13.379443  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:13.465856  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:13.465856  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.640998  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:13.879576  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:13.965419  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:13.965417  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.141470  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:14.379742  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:14.465858  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.465919  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:14.641058  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:14.953461  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:14.976705  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:14.976924  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:15.141842  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:15.379282  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:15.466823  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:15.466802  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:15.640824  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:15.879438  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:15.967509  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:15.967576  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:16.141701  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:16.379778  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:16.465878  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:16.466005  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:16.641690  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:16.879420  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:16.966078  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:16.966335  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:17.141049  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:17.380060  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:17.466196  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:17.466238  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:17.641250  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:17.878716  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:17.966197  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:17.966365  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:18.141346  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:18.380185  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:18.466366  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:18.466433  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:18.641704  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:18.879423  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:18.980124  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:18.980363  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:19.141798  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:19.379365  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:19.465999  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:19.466028  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:19.640945  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:19.879387  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:19.966645  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:19.966849  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:20.141149  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:20.379959  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:20.545520  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:20.545599  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:20.727863  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:20.878770  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:20.965345  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:20.965544  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.141504  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:21.378988  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:21.466059  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:21.466194  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.641759  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:21.878636  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:21.965347  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:21.965394  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.140906  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:22.379120  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:22.466066  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.466186  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:22.641549  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:22.879495  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:22.966302  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:22.966306  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:23.140907  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:23.379414  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:23.466100  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:23.466289  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:23.640878  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:23.879445  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:23.966398  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:23.966451  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:24.141406  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:24.379490  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:24.468461  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:24.468667  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:24.641051  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:24.879994  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:24.966362  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:24.966480  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:25.141787  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:25.378802  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:25.466975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:25.467055  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:25.640975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:25.942565  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:25.966318  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:25.966506  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:26.141449  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:26.378591  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:26.465790  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:26.465796  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:26.640672  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:26.878851  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:26.966070  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:26.966184  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:27.141361  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:27.380438  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:27.466783  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:27.466990  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:27.641444  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:27.878826  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:27.979801  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:27.979850  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:28.140744  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:28.379055  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:28.466053  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1120 20:31:28.466262  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:28.641303  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:28.879679  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:28.965800  255490 kapi.go:107] duration metric: took 43.003520491s to wait for kubernetes.io/minikube-addons=registry ...
	I1120 20:31:28.965911  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:29.141056  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:29.379115  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:29.466254  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:29.641507  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:29.878636  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:29.965941  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:30.141206  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:30.379857  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:30.477055  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:30.641385  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:30.882411  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:30.966429  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:31.141855  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:31.379745  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:31.465333  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:31.641912  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:31.878750  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:31.967269  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:32.140478  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:32.378472  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:32.466262  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:32.641363  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:32.878949  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:32.967786  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:33.140929  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:33.379057  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:33.466061  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:33.640975  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:33.879276  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:33.966802  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:34.141375  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:34.379084  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:34.466440  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:34.641629  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:34.879379  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:34.966285  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:35.141496  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:35.378932  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:35.465775  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:35.640911  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1120 20:31:35.879933  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:35.968449  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:36.141155  255490 kapi.go:107] duration metric: took 43.503506437s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1120 20:31:36.239922  255490 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-658933 cluster.
	I1120 20:31:36.340891  255490 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1120 20:31:36.406587  255490 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
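
	A minimal sketch of the two options described in the messages above, assuming a hypothetical pod name (my-app) and image; the gcp-auth-skip-secret label key and the --refresh flag come from the messages themselves, everything else is illustrative:

	  # create a pod the gcp-auth webhook should skip (label value assumed to be "true")
	  kubectl run my-app --image=nginx --labels="gcp-auth-skip-secret=true"

	  # or re-mount credentials into existing pods by re-enabling the addon
	  minikube -p addons-658933 addons enable gcp-auth --refresh
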
	I1120 20:31:36.432289  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:36.466587  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:36.879698  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:36.966233  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:37.381934  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:37.465962  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:37.878717  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:37.965558  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:38.378917  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:38.466261  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:38.878887  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:38.965538  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:39.379310  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:39.465857  255490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1120 20:31:39.879910  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:39.966474  255490 kapi.go:107] duration metric: took 54.004199635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1120 20:31:40.378921  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:40.880786  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:41.379335  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:41.879604  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:42.379810  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:42.880290  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:43.379398  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:43.879071  255490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1120 20:31:44.379045  255490 kapi.go:107] duration metric: took 58.003792137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1120 20:31:44.422235  255490 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, yakd, cloud-spanner, nvidia-device-plugin, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1120 20:31:44.484100  255490 addons.go:515] duration metric: took 1m0.195604559s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns inspektor-gadget storage-provisioner metrics-server yakd cloud-spanner nvidia-device-plugin default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1120 20:31:44.484183  255490 start.go:247] waiting for cluster config update ...
	I1120 20:31:44.484206  255490 start.go:256] writing updated cluster config ...
	I1120 20:31:44.484522  255490 ssh_runner.go:195] Run: rm -f paused
	I1120 20:31:44.488771  255490 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:31:44.492191  255490 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zbjpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.496661  255490 pod_ready.go:94] pod "coredns-66bc5c9577-zbjpk" is "Ready"
	I1120 20:31:44.496685  255490 pod_ready.go:86] duration metric: took 4.449802ms for pod "coredns-66bc5c9577-zbjpk" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.498585  255490 pod_ready.go:83] waiting for pod "etcd-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.502485  255490 pod_ready.go:94] pod "etcd-addons-658933" is "Ready"
	I1120 20:31:44.502510  255490 pod_ready.go:86] duration metric: took 3.902985ms for pod "etcd-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.504290  255490 pod_ready.go:83] waiting for pod "kube-apiserver-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.508014  255490 pod_ready.go:94] pod "kube-apiserver-addons-658933" is "Ready"
	I1120 20:31:44.508031  255490 pod_ready.go:86] duration metric: took 3.720075ms for pod "kube-apiserver-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.509740  255490 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:44.893180  255490 pod_ready.go:94] pod "kube-controller-manager-addons-658933" is "Ready"
	I1120 20:31:44.893239  255490 pod_ready.go:86] duration metric: took 383.453382ms for pod "kube-controller-manager-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.094021  255490 pod_ready.go:83] waiting for pod "kube-proxy-tkd84" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.493525  255490 pod_ready.go:94] pod "kube-proxy-tkd84" is "Ready"
	I1120 20:31:45.493553  255490 pod_ready.go:86] duration metric: took 399.502857ms for pod "kube-proxy-tkd84" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:45.692987  255490 pod_ready.go:83] waiting for pod "kube-scheduler-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:46.091911  255490 pod_ready.go:94] pod "kube-scheduler-addons-658933" is "Ready"
	I1120 20:31:46.091940  255490 pod_ready.go:86] duration metric: took 398.925831ms for pod "kube-scheduler-addons-658933" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:31:46.091952  255490 pod_ready.go:40] duration metric: took 1.603149594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:31:46.136272  255490 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 20:31:46.138330  255490 out.go:179] * Done! kubectl is now configured to use "addons-658933" cluster and "default" namespace by default
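
	A quick way to confirm the configured context at this point, assuming kubectl is on PATH; the expected output is inferred from the message above, not captured from this run:

	  kubectl config current-context    # expect: addons-658933
	  kubectl get pods --namespace default
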
	
	
	==> CRI-O <==
	Nov 20 20:31:43 addons-658933 crio[772]: time="2025-11-20T20:31:43.048530911Z" level=info msg="Starting container: 0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1" id=ae8c9dc6-c9cd-4194-a110-bcf59956ac04 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:31:43 addons-658933 crio[772]: time="2025-11-20T20:31:43.05130556Z" level=info msg="Started container" PID=6212 containerID=0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1 description=kube-system/csi-hostpathplugin-z7dj2/csi-snapshotter id=ae8c9dc6-c9cd-4194-a110-bcf59956ac04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00eced653856a1d24e88f07178dea6030d656a058a14219720e47b8c1da2338d
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.003672084Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3b942a7f-ffac-42eb-964e-cb07fe8d0f7c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.003741107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.009438387Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8cc6ad1024aab81b0c96bbc30cafb4867c696ab0857823455d36157534975511 UID:28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4 NetNS:/var/run/netns/e529127b-c17a-443e-b002-1099a5ae7a8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d02318}] Aliases:map[]}"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.009466446Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.019192774Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8cc6ad1024aab81b0c96bbc30cafb4867c696ab0857823455d36157534975511 UID:28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4 NetNS:/var/run/netns/e529127b-c17a-443e-b002-1099a5ae7a8c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d02318}] Aliases:map[]}"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.019356082Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.020128762Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.020969551Z" level=info msg="Ran pod sandbox 8cc6ad1024aab81b0c96bbc30cafb4867c696ab0857823455d36157534975511 with infra container: default/busybox/POD" id=3b942a7f-ffac-42eb-964e-cb07fe8d0f7c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.022271256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e58aaf4d-6260-4070-ba6e-d8405e7b4025 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.022374985Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e58aaf4d-6260-4070-ba6e-d8405e7b4025 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.022405999Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e58aaf4d-6260-4070-ba6e-d8405e7b4025 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.022944599Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fe1e3217-1427-4dfd-a05a-2f28f899cba4 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:31:47 addons-658933 crio[772]: time="2025-11-20T20:31:47.024365097Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.103036376Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fe1e3217-1427-4dfd-a05a-2f28f899cba4 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.103565958Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4978094b-e78d-4792-bfef-5a7f526a7795 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.104841227Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8ed022a4-5ecf-4850-bd16-bf36a0397b67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.108206122Z" level=info msg="Creating container: default/busybox/busybox" id=eeba31c3-7b80-442a-9fcd-2f470e70ce57 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.108340321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.113285432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.113743412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.140128387Z" level=info msg="Created container 5abeb02d2e1daf47b9631e7db2bb76f775fd4bbe71f67d6a3142eca48c76eb1f: default/busybox/busybox" id=eeba31c3-7b80-442a-9fcd-2f470e70ce57 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.140780396Z" level=info msg="Starting container: 5abeb02d2e1daf47b9631e7db2bb76f775fd4bbe71f67d6a3142eca48c76eb1f" id=86808f9c-891c-48e9-a614-23e563e6c15c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:31:49 addons-658933 crio[772]: time="2025-11-20T20:31:49.142574448Z" level=info msg="Started container" PID=6330 containerID=5abeb02d2e1daf47b9631e7db2bb76f775fd4bbe71f67d6a3142eca48c76eb1f description=default/busybox/busybox id=86808f9c-891c-48e9-a614-23e563e6c15c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cc6ad1024aab81b0c96bbc30cafb4867c696ab0857823455d36157534975511
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	5abeb02d2e1da       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   8cc6ad1024aab       busybox                                    default
	0869a7f04bf4e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	c00315bedde6d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	3dc9d9f32ffaa       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	e84ec310b2afd       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	a124ab10918ee       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             18 seconds ago       Running             controller                               0                   78b9702a19e31       ingress-nginx-controller-6c8bf45fb-dsc49   ingress-nginx
	e079a3716f65a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 22 seconds ago       Running             gcp-auth                                 0                   323a6449d1860       gcp-auth-78565c9fb4-vprfm                  gcp-auth
	564d810ba191a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                25 seconds ago       Running             node-driver-registrar                    0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	fddb94943c333       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            26 seconds ago       Running             gadget                                   0                   d41f82cebc1c1       gadget-g5x6v                               gadget
	14e7ea80f3e9e       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             27 seconds ago       Exited              patch                                    2                   c1c72cf21d6f7       gcp-auth-certs-patch-nn66k                 gcp-auth
	ca550c41b2a77       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              29 seconds ago       Running             registry-proxy                           0                   5dfa51a9926d0       registry-proxy-lq2h5                       kube-system
	6362678378ad4       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     32 seconds ago       Running             nvidia-device-plugin-ctr                 0                   4f3f2ab1ef847       nvidia-device-plugin-daemonset-xkkmp       kube-system
	cf2ac7eff8739       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     35 seconds ago       Running             amd-gpu-device-plugin                    0                   3e5156c76b147       amd-gpu-device-plugin-vm8jx                kube-system
	9224e8f92f1b1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   37 seconds ago       Running             csi-external-health-monitor-controller   0                   00eced653856a       csi-hostpathplugin-z7dj2                   kube-system
	620d709913349       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   38 seconds ago       Exited              create                                   0                   cc8697c976213       gcp-auth-certs-create-jtc4g                gcp-auth
	adac6d2c858f2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             38 seconds ago       Running             csi-attacher                             0                   3fd3ed518b255       csi-hostpath-attacher-0                    kube-system
	afe1aac38b026       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      39 seconds ago       Running             volume-snapshot-controller               0                   626d22e42829b       snapshot-controller-7d9fbc56b8-7fv92       kube-system
	a3370293507b7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              39 seconds ago       Running             csi-resizer                              0                   e96a1ab544b70       csi-hostpath-resizer-0                     kube-system
	74e0e28db0b78       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             40 seconds ago       Running             local-path-provisioner                   0                   1a403f5b3750d       local-path-provisioner-648f6765c9-tchwv    local-path-storage
	cfe3cfa35ee4e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      42 seconds ago       Running             volume-snapshot-controller               0                   1412ff6800cb4       snapshot-controller-7d9fbc56b8-bxn2q       kube-system
	ca85a97de8f75       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             43 seconds ago       Exited              patch                                    1                   5183d8f5f49db       ingress-nginx-admission-patch-b4csh        ingress-nginx
	b453b2b6d5746       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   43 seconds ago       Exited              create                                   0                   f187215f51f32       ingress-nginx-admission-create-lwnhv       ingress-nginx
	581473717f5db       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              44 seconds ago       Running             yakd                                     0                   9657ed102c296       yakd-dashboard-5ff678cb9-rk9b8             yakd-dashboard
	6e59cda6b4475       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               47 seconds ago       Running             cloud-spanner-emulator                   0                   ef8fdb25848a8       cloud-spanner-emulator-6f9fcf858b-j7pgx    default
	6d168b8373fd1       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               52 seconds ago       Running             minikube-ingress-dns                     0                   a6cdfb1765ad1       kube-ingress-dns-minikube                  kube-system
	18df77ead4cf8       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           57 seconds ago       Running             registry                                 0                   59c5bb1453517       registry-6b586f9694-zwcwl                  kube-system
	2dc54febfd287       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   ffb3f956f8953       metrics-server-85b7d694d7-z2pc4            kube-system
	c812b6447964f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   626c2e8eee40e       coredns-66bc5c9577-zbjpk                   kube-system
	b9c2a6d4679fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   3f14ab61d1b19       storage-provisioner                        kube-system
	2f3f9b31aedbb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   e3ae50b3edc41       kube-proxy-tkd84                           kube-system
	cb4964e2e68f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   9173d8b0919c7       kindnet-46wwr                              kube-system
	c51ec37256def       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   4a697d374c6ab       kube-controller-manager-addons-658933      kube-system
	b69462e9ce88e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   5eb8d48f67cd9       kube-scheduler-addons-658933               kube-system
	6d905baa8985b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   0cba80a333c64       etcd-addons-658933                         kube-system
	4038552b2ad49       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   6f6ca67588f9d       kube-apiserver-addons-658933               kube-system
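	
	A minimal way to regenerate the table above, assuming crictl on the node is already pointed at the CRI-O socket (as in the standard minikube image), is to run it over the ssh helper:
	
	  $ minikube -p addons-658933 ssh -- sudo crictl ps -a
	
	The truncated CONTAINER and POD ID columns are the leading 13 characters of the full IDs that appear in the crio log above.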
	
	
	==> coredns [c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7] <==
	[INFO] 10.244.0.19:52850 - 10942 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003505895s
	[INFO] 10.244.0.19:50074 - 49908 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000093132s
	[INFO] 10.244.0.19:50074 - 49587 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000137398s
	[INFO] 10.244.0.19:41040 - 31502 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000079919s
	[INFO] 10.244.0.19:41040 - 31185 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000088769s
	[INFO] 10.244.0.19:50672 - 48216 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000054827s
	[INFO] 10.244.0.19:50672 - 48490 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000107855s
	[INFO] 10.244.0.19:58400 - 665 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107521s
	[INFO] 10.244.0.19:58400 - 450 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148229s
	[INFO] 10.244.0.22:60685 - 29723 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000202392s
	[INFO] 10.244.0.22:58184 - 13248 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197622s
	[INFO] 10.244.0.22:54148 - 64370 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135686s
	[INFO] 10.244.0.22:39590 - 12704 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168738s
	[INFO] 10.244.0.22:35953 - 21800 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151055s
	[INFO] 10.244.0.22:46482 - 2688 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000202854s
	[INFO] 10.244.0.22:36455 - 55600 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003478896s
	[INFO] 10.244.0.22:51881 - 16428 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00354676s
	[INFO] 10.244.0.22:56240 - 16100 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005341664s
	[INFO] 10.244.0.22:53948 - 51258 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00657152s
	[INFO] 10.244.0.22:36503 - 63266 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004683085s
	[INFO] 10.244.0.22:43082 - 58810 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004787482s
	[INFO] 10.244.0.22:34122 - 64455 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003849588s
	[INFO] 10.244.0.22:35376 - 41719 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005227255s
	[INFO] 10.244.0.22:55418 - 26939 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000887284s
	[INFO] 10.244.0.22:33424 - 36150 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00106945s
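	
	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: each client walks its resolv.conf search domains before the absolute name finally resolves. One quick way to confirm this from inside the cluster, assuming a throwaway busybox pod is acceptable, is to query the fully qualified name with a trailing dot so no suffixes are appended:
	
	  $ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
	      nslookup registry.kube-system.svc.cluster.local.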
	
	
	==> describe nodes <==
	Name:               addons-658933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-658933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=addons-658933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-658933
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-658933"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:30:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-658933
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:31:40 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:31:40 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:31:40 +0000   Thu, 20 Nov 2025 20:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:31:40 +0000   Thu, 20 Nov 2025 20:30:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-658933
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                9c80e830-a2e4-4134-9f57-97b54019831a
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-6f9fcf858b-j7pgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  gadget                      gadget-g5x6v                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  gcp-auth                    gcp-auth-78565c9fb4-vprfm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-dsc49    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         72s
	  kube-system                 amd-gpu-device-plugin-vm8jx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 coredns-66bc5c9577-zbjpk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     73s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 csi-hostpathplugin-z7dj2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 etcd-addons-658933                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         79s
	  kube-system                 kindnet-46wwr                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-addons-658933                250m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-addons-658933       200m (2%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-tkd84                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-addons-658933                100m (1%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 metrics-server-85b7d694d7-z2pc4             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         72s
	  kube-system                 nvidia-device-plugin-daemonset-xkkmp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 registry-6b586f9694-zwcwl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 registry-creds-764b6fb674-h47jz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 registry-proxy-lq2h5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 snapshot-controller-7d9fbc56b8-7fv92        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 snapshot-controller-7d9fbc56b8-bxn2q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  local-path-storage          local-path-provisioner-648f6765c9-tchwv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rk9b8              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 71s   kube-proxy       
	  Normal  Starting                 79s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s   kubelet          Node addons-658933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s   kubelet          Node addons-658933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s   kubelet          Node addons-658933 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           74s   node-controller  Node addons-658933 event: Registered Node addons-658933 in Controller
	  Normal  NodeReady                62s   kubelet          Node addons-658933 status is now: NodeReady
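	
	The node summary above is the standard kubectl view and can be refreshed at any point during the run with:
	
	  $ kubectl describe node addons-658933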
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 b2 b9 90 81 64 08 06
	[Nov20 20:15] IPv4: martian source 10.244.0.1 from 10.244.0.33, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 63 0d 94 fc 92 08 06
	[ +23.985095] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 76 17 64 8d 6b 08 06
	[Nov20 20:16] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 2b 13 d1 14 fa 08 06
	[ +24.261769] IPv4: martian source 10.244.0.1 from 10.244.0.38, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce 37 fc 9c c7 60 08 06
	[Nov20 20:18] IPv4: martian source 10.244.0.1 from 10.244.0.44, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fe 57 15 5f d6 3b 08 06
	[Nov20 20:20] IPv4: martian source 10.244.0.1 from 10.244.0.45, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe b0 3d e6 ce 9c 08 06
	[ +25.462033] IPv4: martian source 10.244.0.1 from 10.244.0.46, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 f7 a0 bc a3 9c 08 06
	[Nov20 20:21] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 77 0f c4 68 b7 08 06
	[ +16.517153] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 26 60 e8 4d 7f c5 08 06
	[Nov20 20:22] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e 8b 08 96 d9 af 08 06
	[ +34.202211] IPv4: martian source 10.244.0.1 from 10.244.0.50, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
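	
	The recurring "martian source" entries record packets from the pod CIDR (10.244.0.0/24) arriving on eth0 with a broadcast link-layer header; in kind-style minikube networking they are typically harmless noise. Assuming shell access to the node, their volume can be checked with:
	
	  $ minikube -p addons-658933 ssh -- sudo dmesg | grep -c "martian source"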
	
	
	==> etcd [6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106] <==
	{"level":"warn","ts":"2025-11-20T20:30:35.800641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.806764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.814383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.833259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.839255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.844989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:35.888100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:46.856132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:30:46.862986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:05.009858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.904076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:05.009956Z","caller":"traceutil/trace.go:172","msg":"trace[2046628416] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"132.058384ms","start":"2025-11-20T20:31:04.877882Z","end":"2025-11-20T20:31:05.009940Z","steps":["trace[2046628416] 'range keys from in-memory index tree'  (duration: 131.800514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:10.979455Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.608197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:10.979608Z","caller":"traceutil/trace.go:172","msg":"trace[193603763] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:980; }","duration":"101.774585ms","start":"2025-11-20T20:31:10.877818Z","end":"2025-11-20T20:31:10.979593Z","steps":["trace[193603763] 'range keys from in-memory index tree'  (duration: 101.546536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:13.308865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.317428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.331958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:31:13.339924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:31:14.951487Z","caller":"traceutil/trace.go:172","msg":"trace[1999540074] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"106.660318ms","start":"2025-11-20T20:31:14.844806Z","end":"2025-11-20T20:31:14.951466Z","steps":["trace[1999540074] 'process raft request'  (duration: 106.539878ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T20:31:20.377601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.955615ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T20:31:20.377661Z","caller":"traceutil/trace.go:172","msg":"trace[1843906868] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"139.973519ms","start":"2025-11-20T20:31:20.237665Z","end":"2025-11-20T20:31:20.377639Z","steps":["trace[1843906868] 'process raft request'  (duration: 59.331457ms)","trace[1843906868] 'compare'  (duration: 80.534512ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T20:31:20.377678Z","caller":"traceutil/trace.go:172","msg":"trace[193963102] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1071; }","duration":"136.052904ms","start":"2025-11-20T20:31:20.241610Z","end":"2025-11-20T20:31:20.377663Z","steps":["trace[193963102] 'range keys from in-memory index tree'  (duration: 135.903091ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950973Z","caller":"traceutil/trace.go:172","msg":"trace[1605497090] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"109.578152ms","start":"2025-11-20T20:31:25.841379Z","end":"2025-11-20T20:31:25.950957Z","steps":["trace[1605497090] 'process raft request'  (duration: 109.310128ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.951072Z","caller":"traceutil/trace.go:172","msg":"trace[1563468241] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"105.824766ms","start":"2025-11-20T20:31:25.845231Z","end":"2025-11-20T20:31:25.951056Z","steps":["trace[1563468241] 'process raft request'  (duration: 105.693332ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950986Z","caller":"traceutil/trace.go:172","msg":"trace[809948351] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"108.233852ms","start":"2025-11-20T20:31:25.842744Z","end":"2025-11-20T20:31:25.950978Z","steps":["trace[809948351] 'process raft request'  (duration: 108.110564ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:31:25.950971Z","caller":"traceutil/trace.go:172","msg":"trace[1456414447] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"108.541361ms","start":"2025-11-20T20:31:25.842400Z","end":"2025-11-20T20:31:25.950941Z","steps":["trace[1456414447] 'process raft request'  (duration: 108.386842ms)"],"step_count":1}
	
	
	==> gcp-auth [e079a3716f65a1edf2f2bd82a1da29c254f9b9edfa58fbb8ded0e021c8f48ab8] <==
	2025/11/20 20:31:35 GCP Auth Webhook started!
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	2025/11/20 20:31:46 Ready to marshal response ...
	2025/11/20 20:31:46 Ready to write response ...
	
	
	==> kernel <==
	 20:31:57 up  3:14,  0 user,  load average: 2.69, 1.65, 1.16
	Linux addons-658933 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3] <==
	I1120 20:30:45.539763       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:30:45.539797       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:30:45.539822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:30:45.540460       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 20:30:45.540541       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 20:30:45.540620       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 20:30:45.541497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 20:30:45.542126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 20:30:46.640496       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:30:46.640525       1 metrics.go:72] Registering metrics
	I1120 20:30:46.640613       1 controller.go:711] "Syncing nftables rules"
	I1120 20:30:55.538994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:30:55.539112       1 main.go:301] handling current node
	I1120 20:31:05.538546       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:05.538593       1 main.go:301] handling current node
	I1120 20:31:15.538666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:15.538705       1 main.go:301] handling current node
	I1120 20:31:25.538858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:25.538903       1 main.go:301] handling current node
	I1120 20:31:35.538569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:35.538623       1 main.go:301] handling current node
	I1120 20:31:45.539156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:45.539191       1 main.go:301] handling current node
	I1120 20:31:55.538607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:31:55.538637       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e] <==
	 > logger="UnhandledError"
	E1120 20:30:59.047184       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.148.78:443: connect: connection refused" logger="UnhandledError"
	E1120 20:30:59.048695       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.148.78:443: connect: connection refused" logger="UnhandledError"
	W1120 20:31:00.047503       1 handler_proxy.go:99] no RequestInfo found in the context
	W1120 20:31:00.047524       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:31:00.047568       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1120 20:31:00.047592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1120 20:31:00.047668       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1120 20:31:00.048809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1120 20:31:04.064547       1 handler_proxy.go:99] no RequestInfo found in the context
	E1120 20:31:04.064636       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1120 20:31:04.064677       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.148.78:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1120 20:31:04.076718       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1120 20:31:13.308795       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 20:31:13.317336       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1120 20:31:13.331919       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1120 20:31:13.339929       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1120 20:31:55.844411       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44948: use of closed network connection
	E1120 20:31:56.007342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44976: use of closed network connection
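	
	The repeated v1beta1.metrics.k8s.io errors earlier in this log are the apiserver probing metrics-server before it is ready; they stop once the aggregated APIService reports Available, which can be checked directly:
	
	  $ kubectl get apiservice v1beta1.metrics.k8s.io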
	
	
	==> kube-controller-manager [c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f] <==
	I1120 20:30:43.288013       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:30:43.288020       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 20:30:43.288101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-658933"
	I1120 20:30:43.288164       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 20:30:43.288329       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 20:30:43.288503       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 20:30:43.288559       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 20:30:43.288644       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:30:43.288664       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 20:30:43.289359       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 20:30:43.289377       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:30:43.289388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 20:30:43.289426       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 20:30:43.289484       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 20:30:43.290629       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:30:43.291797       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 20:30:43.292638       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:30:43.294729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:30:43.312859       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:30:58.291250       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1120 20:31:13.300936       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1120 20:31:13.300985       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1120 20:31:13.324311       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:31:13.401480       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:31:13.424839       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb] <==
	I1120 20:30:45.272018       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:30:45.560864       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:30:45.667314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:30:45.670429       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 20:30:45.670725       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:30:45.723636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:30:45.723780       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:30:45.743087       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:30:45.743541       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:30:45.743559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:30:45.745531       1 config.go:200] "Starting service config controller"
	I1120 20:30:45.745594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:30:45.746004       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:30:45.746683       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:30:45.746114       1 config.go:309] "Starting node config controller"
	I1120 20:30:45.746793       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:30:45.746825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:30:45.746408       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:30:45.746868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:30:45.845793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:30:45.852808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:30:45.854634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037] <==
	E1120 20:30:36.300247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:30:36.300244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:30:36.300293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:30:36.300251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:30:36.300341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:30:36.300344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:30:36.300363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:30:36.300369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:30:36.300382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:30:36.300397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:30:36.300448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:30:36.300490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:30:36.300499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:30:36.300549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:30:36.300585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:30:37.138426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:30:37.183809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:30:37.207208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:30:37.370114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:30:37.398268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:30:37.442341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:30:37.452455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:30:37.492834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:30:37.533088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1120 20:30:39.897865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
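	
	The list/watch denials above all land in the first seconds after boot, before the scheduler's RBAC bindings have propagated; the final "Caches are synced" line shows the informers recovered. Whether the permissions are in place now can be verified with impersonation, for example:
	
	  $ kubectl auth can-i list pods --as=system:kube-scheduler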
	
	
	==> kubelet <==
	Nov 20 20:31:25 addons-658933 kubelet[1301]: I1120 20:31:25.839478    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xkkmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:31:26 addons-658933 kubelet[1301]: I1120 20:31:26.842197    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-xkkmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:31:27 addons-658933 kubelet[1301]: E1120 20:31:27.532092    1301 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 20 20:31:27 addons-658933 kubelet[1301]: E1120 20:31:27.532242    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/730db9ac-a022-4b3f-a29d-60b579072144-gcr-creds podName:730db9ac-a022-4b3f-a29d-60b579072144 nodeName:}" failed. No retries permitted until 2025-11-20 20:31:59.532197912 +0000 UTC m=+80.991838452 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/730db9ac-a022-4b3f-a29d-60b579072144-gcr-creds") pod "registry-creds-764b6fb674-h47jz" (UID: "730db9ac-a022-4b3f-a29d-60b579072144") : secret "registry-creds-gcr" not found
	Nov 20 20:31:28 addons-658933 kubelet[1301]: I1120 20:31:28.853355    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lq2h5" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:31:28 addons-658933 kubelet[1301]: I1120 20:31:28.863198    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-xkkmp" podStartSLOduration=4.573866082 podStartE2EDuration="33.863175886s" podCreationTimestamp="2025-11-20 20:30:55 +0000 UTC" firstStartedPulling="2025-11-20 20:30:56.097936285 +0000 UTC m=+17.557576802" lastFinishedPulling="2025-11-20 20:31:25.387246071 +0000 UTC m=+46.846886606" observedRunningTime="2025-11-20 20:31:25.953788956 +0000 UTC m=+47.413429516" watchObservedRunningTime="2025-11-20 20:31:28.863175886 +0000 UTC m=+50.322816424"
	Nov 20 20:31:28 addons-658933 kubelet[1301]: I1120 20:31:28.863540    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-lq2h5" podStartSLOduration=2.161289657 podStartE2EDuration="33.863524321s" podCreationTimestamp="2025-11-20 20:30:55 +0000 UTC" firstStartedPulling="2025-11-20 20:30:56.177938266 +0000 UTC m=+17.637578784" lastFinishedPulling="2025-11-20 20:31:27.880172917 +0000 UTC m=+49.339813448" observedRunningTime="2025-11-20 20:31:28.862716724 +0000 UTC m=+50.322357262" watchObservedRunningTime="2025-11-20 20:31:28.863524321 +0000 UTC m=+50.323164860"
	Nov 20 20:31:29 addons-658933 kubelet[1301]: I1120 20:31:29.625882    1301 scope.go:117] "RemoveContainer" containerID="3ccb91b61dbee90cd0b3b7068493cfaa7af1b3dafe176796e44d2d7c77a5921d"
	Nov 20 20:31:29 addons-658933 kubelet[1301]: I1120 20:31:29.863116    1301 scope.go:117] "RemoveContainer" containerID="3ccb91b61dbee90cd0b3b7068493cfaa7af1b3dafe176796e44d2d7c77a5921d"
	Nov 20 20:31:29 addons-658933 kubelet[1301]: I1120 20:31:29.863303    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lq2h5" secret="" err="secret \"gcp-auth\" not found"
	Nov 20 20:31:31 addons-658933 kubelet[1301]: I1120 20:31:31.156954    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5vz\" (UniqueName: \"kubernetes.io/projected/c689095a-1f57-4b45-92df-4d87d15827c4-kube-api-access-zs5vz\") pod \"c689095a-1f57-4b45-92df-4d87d15827c4\" (UID: \"c689095a-1f57-4b45-92df-4d87d15827c4\") "
	Nov 20 20:31:31 addons-658933 kubelet[1301]: I1120 20:31:31.159494    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c689095a-1f57-4b45-92df-4d87d15827c4-kube-api-access-zs5vz" (OuterVolumeSpecName: "kube-api-access-zs5vz") pod "c689095a-1f57-4b45-92df-4d87d15827c4" (UID: "c689095a-1f57-4b45-92df-4d87d15827c4"). InnerVolumeSpecName "kube-api-access-zs5vz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 20 20:31:31 addons-658933 kubelet[1301]: I1120 20:31:31.257906    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zs5vz\" (UniqueName: \"kubernetes.io/projected/c689095a-1f57-4b45-92df-4d87d15827c4-kube-api-access-zs5vz\") on node \"addons-658933\" DevicePath \"\""
	Nov 20 20:31:31 addons-658933 kubelet[1301]: I1120 20:31:31.872760    1301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c72cf21d6f795aa94954755bc775c3e0120c99543255df7828408e56a2f831"
	Nov 20 20:31:31 addons-658933 kubelet[1301]: I1120 20:31:31.895752    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-g5x6v" podStartSLOduration=20.508026699 podStartE2EDuration="46.895727198s" podCreationTimestamp="2025-11-20 20:30:45 +0000 UTC" firstStartedPulling="2025-11-20 20:31:04.832316921 +0000 UTC m=+26.291957438" lastFinishedPulling="2025-11-20 20:31:31.2200174 +0000 UTC m=+52.679657937" observedRunningTime="2025-11-20 20:31:31.895588174 +0000 UTC m=+53.355228714" watchObservedRunningTime="2025-11-20 20:31:31.895727198 +0000 UTC m=+53.355367738"
	Nov 20 20:31:35 addons-658933 kubelet[1301]: I1120 20:31:35.915773    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-vprfm" podStartSLOduration=36.379666821 podStartE2EDuration="43.915750697s" podCreationTimestamp="2025-11-20 20:30:52 +0000 UTC" firstStartedPulling="2025-11-20 20:31:27.872562145 +0000 UTC m=+49.332202665" lastFinishedPulling="2025-11-20 20:31:35.408646022 +0000 UTC m=+56.868286541" observedRunningTime="2025-11-20 20:31:35.913892436 +0000 UTC m=+57.373532973" watchObservedRunningTime="2025-11-20 20:31:35.915750697 +0000 UTC m=+57.375391235"
	Nov 20 20:31:39 addons-658933 kubelet[1301]: I1120 20:31:39.922519    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-dsc49" podStartSLOduration=43.645152177 podStartE2EDuration="54.922496296s" podCreationTimestamp="2025-11-20 20:30:45 +0000 UTC" firstStartedPulling="2025-11-20 20:31:27.879175537 +0000 UTC m=+49.338816065" lastFinishedPulling="2025-11-20 20:31:39.156519646 +0000 UTC m=+60.616160184" observedRunningTime="2025-11-20 20:31:39.921118479 +0000 UTC m=+61.380759022" watchObservedRunningTime="2025-11-20 20:31:39.922496296 +0000 UTC m=+61.382136833"
	Nov 20 20:31:41 addons-658933 kubelet[1301]: I1120 20:31:41.670681    1301 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 20 20:31:41 addons-658933 kubelet[1301]: I1120 20:31:41.670723    1301 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 20 20:31:43 addons-658933 kubelet[1301]: I1120 20:31:43.950389    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-z7dj2" podStartSLOduration=2.041892518 podStartE2EDuration="48.950368187s" podCreationTimestamp="2025-11-20 20:30:55 +0000 UTC" firstStartedPulling="2025-11-20 20:30:56.096822968 +0000 UTC m=+17.556463501" lastFinishedPulling="2025-11-20 20:31:43.005298643 +0000 UTC m=+64.464939170" observedRunningTime="2025-11-20 20:31:43.949912603 +0000 UTC m=+65.409553133" watchObservedRunningTime="2025-11-20 20:31:43.950368187 +0000 UTC m=+65.410008724"
	Nov 20 20:31:46 addons-658933 kubelet[1301]: I1120 20:31:46.782126    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzt6\" (UniqueName: \"kubernetes.io/projected/28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4-kube-api-access-vhzt6\") pod \"busybox\" (UID: \"28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4\") " pod="default/busybox"
	Nov 20 20:31:46 addons-658933 kubelet[1301]: I1120 20:31:46.782205    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4-gcp-creds\") pod \"busybox\" (UID: \"28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4\") " pod="default/busybox"
	Nov 20 20:31:49 addons-658933 kubelet[1301]: I1120 20:31:49.974619    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.892905691 podStartE2EDuration="3.97459583s" podCreationTimestamp="2025-11-20 20:31:46 +0000 UTC" firstStartedPulling="2025-11-20 20:31:47.022639299 +0000 UTC m=+68.482279821" lastFinishedPulling="2025-11-20 20:31:49.104329442 +0000 UTC m=+70.563969960" observedRunningTime="2025-11-20 20:31:49.973906178 +0000 UTC m=+71.433546715" watchObservedRunningTime="2025-11-20 20:31:49.97459583 +0000 UTC m=+71.434236370"
	Nov 20 20:31:52 addons-658933 kubelet[1301]: I1120 20:31:52.628967    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089f9c8a-8929-46b9-8bb4-1f6b51d11e06" path="/var/lib/kubelet/pods/089f9c8a-8929-46b9-8bb4-1f6b51d11e06/volumes"
	Nov 20 20:31:56 addons-658933 kubelet[1301]: E1120 20:31:56.007325    1301 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51796->127.0.0.1:33905: write tcp 127.0.0.1:51796->127.0.0.1:33905: write: broken pipe
	
	
	==> storage-provisioner [b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b] <==
	W1120 20:31:32.483603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:34.487162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:34.492089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:36.495963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:36.513049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:38.517255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:38.521985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:40.524684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:40.528083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:42.531835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:42.536672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:44.540161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:44.549426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:46.553254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:46.557228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:48.560908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:48.564842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:50.567647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:50.571357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:52.574316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:52.578436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:54.582206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:54.587741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:56.591036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:31:56.595939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
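Note: the storage-provisioner warnings above fire every two seconds because the provisioner still reads the core v1 Endpoints API (the cadence suggests a leader-election or resync loop); that API is deprecated in Kubernetes v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of reading the replacement API (illustrative only, assuming an in-cluster config and a module that pins k8s.io/client-go):

	// endpointslices.go: list discovery.k8s.io/v1 EndpointSlice objects,
	// the API recommended over the deprecated v1 Endpoints resource.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}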
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-658933 -n addons-658933
helpers_test.go:269: (dbg) Run:  kubectl --context addons-658933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-nn66k ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh registry-creds-764b6fb674-h47jz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-658933 describe pod gcp-auth-certs-patch-nn66k ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh registry-creds-764b6fb674-h47jz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-658933 describe pod gcp-auth-certs-patch-nn66k ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh registry-creds-764b6fb674-h47jz: exit status 1 (60.486648ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-nn66k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-lwnhv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b4csh" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-h47jz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-658933 describe pod gcp-auth-certs-patch-nn66k ingress-nginx-admission-create-lwnhv ingress-nginx-admission-patch-b4csh registry-creds-764b6fb674-h47jz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable headlamp --alsologtostderr -v=1: exit status 11 (251.689877ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:31:58.618956  264476 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:31:58.619241  264476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:58.619253  264476 out.go:374] Setting ErrFile to fd 2...
	I1120 20:31:58.619257  264476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:31:58.619461  264476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:31:58.619715  264476 mustload.go:66] Loading cluster: addons-658933
	I1120 20:31:58.620059  264476 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:58.620072  264476 addons.go:607] checking whether the cluster is paused
	I1120 20:31:58.620150  264476 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:31:58.620161  264476 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:31:58.620545  264476 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:31:58.640450  264476 ssh_runner.go:195] Run: systemctl --version
	I1120 20:31:58.640500  264476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:31:58.659652  264476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:31:58.756279  264476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:31:58.756377  264476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:31:58.786998  264476 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:31:58.787025  264476 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:31:58.787029  264476 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:31:58.787032  264476 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:31:58.787035  264476 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:31:58.787038  264476 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:31:58.787040  264476 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:31:58.787043  264476 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:31:58.787045  264476 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:31:58.787060  264476 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:31:58.787064  264476 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:31:58.787068  264476 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:31:58.787072  264476 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:31:58.787078  264476 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:31:58.787082  264476 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:31:58.787092  264476 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:31:58.787099  264476 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:31:58.787104  264476 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:31:58.787107  264476 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:31:58.787109  264476 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:31:58.787111  264476 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:31:58.787113  264476 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:31:58.787116  264476 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:31:58.787118  264476 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:31:58.787126  264476 cri.go:89] found id: ""
	I1120 20:31:58.787178  264476 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:31:58.802080  264476 out.go:203] 
	W1120 20:31:58.803293  264476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:31:58.803318  264476 out.go:285] * 
	W1120 20:31:58.807362  264476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:31:58.808612  264476 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.55s)
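Note: this failure and the identical exit status 11 failures in the addon tests that follow (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) share one root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json`, and on this crio node runc's default state directory /run/runc does not exist, so the command exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. The sketch below shows one tolerant variant of that check; it is illustrative only, not minikube's implementation, and treating a missing state directory as "nothing is paused" is an assumption:

	// pausedcheck.go: run `sudo runc list -f json` and report paused
	// container IDs, treating an absent /run/runc state directory as
	// "no containers" instead of a fatal error (assumption, see above).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPausedIDs() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// The exact failure seen in this report: runc has never
			// created anything under its default root, so nothing
			// there can be paused.
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPausedIDs()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}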

                                                
                                    
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-j7pgx" [57e0e35e-37c4-4617-9336-df18871dcad5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003409234s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (267.119928ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:32:06.657235  264915 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:06.657398  264915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:06.657412  264915 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:06.657418  264915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:06.657714  264915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:06.658036  264915 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:06.658434  264915 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:06.658455  264915 addons.go:607] checking whether the cluster is paused
	I1120 20:32:06.658543  264915 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:06.658556  264915 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:06.658931  264915 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:06.678107  264915 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:06.678159  264915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:06.698155  264915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:06.795086  264915 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:06.795177  264915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:06.829392  264915 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:06.829426  264915 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:06.829432  264915 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:06.829437  264915 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:06.829441  264915 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:06.829447  264915 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:06.829451  264915 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:06.829454  264915 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:06.829458  264915 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:06.829473  264915 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:06.829477  264915 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:06.829482  264915 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:06.829487  264915 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:06.829491  264915 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:06.829497  264915 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:06.829520  264915 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:06.829524  264915 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:06.829529  264915 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:06.829533  264915 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:06.829536  264915 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:06.829543  264915 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:06.829546  264915 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:06.829550  264915 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:06.829553  264915 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:06.829556  264915 cri.go:89] found id: ""
	I1120 20:32:06.829616  264915 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:06.845723  264915 out.go:203] 
	W1120 20:32:06.848010  264915 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:06.848035  264915 out.go:285] * 
	W1120 20:32:06.852459  264915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:06.853904  264915 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                    
TestAddons/parallel/LocalPath (11.2s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-658933 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-658933 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6fd67aa3-8b20-4511-bc2a-18b48d4ebe45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6fd67aa3-8b20-4511-bc2a-18b48d4ebe45] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6fd67aa3-8b20-4511-bc2a-18b48d4ebe45] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003841483s
addons_test.go:967: (dbg) Run:  kubectl --context addons-658933 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 ssh "cat /opt/local-path-provisioner/pvc-d5c5cd9e-0905-49d9-bb13-e35668184aec_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-658933 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-658933 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (328.457936ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:32:15.013403  266468 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:15.013519  266468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:15.013525  266468 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:15.013530  266468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:15.013842  266468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:15.014205  266468 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:15.014920  266468 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:15.014944  266468 addons.go:607] checking whether the cluster is paused
	I1120 20:32:15.015167  266468 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:15.015194  266468 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:15.015813  266468 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:15.042383  266468 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:15.042477  266468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:15.070866  266468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:15.179157  266468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:15.179264  266468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:15.226305  266468 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:15.226344  266468 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:15.226350  266468 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:15.226355  266468 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:15.226358  266468 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:15.226365  266468 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:15.226369  266468 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:15.226373  266468 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:15.226377  266468 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:15.226390  266468 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:15.226394  266468 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:15.226399  266468 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:15.226402  266468 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:15.226407  266468 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:15.226411  266468 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:15.226426  266468 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:15.226432  266468 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:15.226438  266468 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:15.226442  266468 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:15.226446  266468 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:15.226453  266468 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:15.226457  266468 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:15.226461  266468 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:15.226465  266468 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:15.226468  266468 cri.go:89] found id: ""
	I1120 20:32:15.226523  266468 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:15.248605  266468 out.go:203] 
	W1120 20:32:15.249840  266468 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:15.249860  266468 out.go:285] * 
	W1120 20:32:15.256769  266468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:15.258150  266468 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.20s)
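Note: apart from the shared disable failure, the LocalPath flow itself succeeded; the six repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` invocations above are a poll loop waiting for the claim to become Bound. A standalone sketch of that wait pattern (hypothetical helper, not the helpers_test.go code; the profile and PVC names are taken from this run):

	// waitpvc.go: poll kubectl until a PVC reports phase Bound or a
	// deadline passes, mirroring the repeated jsonpath queries above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitPVCBound(kctx, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kctx,
				"get", "pvc", name, "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // retry interval is arbitrary
		}
		return fmt.Errorf("pvc %s not Bound within %v", name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-658933", "test-pvc", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}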

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xkkmp" [4502d18f-672b-4f3a-8bda-dae9ab852e38] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003509202s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (258.259442ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:32:01.328590  264541 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:01.328766  264541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:01.328781  264541 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:01.328787  264541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:01.329133  264541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:01.329525  264541 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:01.330007  264541 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:01.330030  264541 addons.go:607] checking whether the cluster is paused
	I1120 20:32:01.330123  264541 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:01.330136  264541 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:01.330573  264541 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:01.350371  264541 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:01.350420  264541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:01.370487  264541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:01.467755  264541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:01.467840  264541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:01.499364  264541 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:01.499386  264541 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:01.499390  264541 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:01.499393  264541 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:01.499395  264541 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:01.499399  264541 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:01.499401  264541 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:01.499404  264541 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:01.499406  264541 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:01.499411  264541 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:01.499414  264541 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:01.499416  264541 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:01.499418  264541 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:01.499421  264541 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:01.499423  264541 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:01.499429  264541 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:01.499432  264541 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:01.499437  264541 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:01.499439  264541 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:01.499441  264541 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:01.499446  264541 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:01.499448  264541 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:01.499452  264541 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:01.499456  264541 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:01.499459  264541 cri.go:89] found id: ""
	I1120 20:32:01.499496  264541 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:01.514775  264541 out.go:203] 
	W1120 20:32:01.516011  264541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:01.516032  264541 out.go:285] * 
	W1120 20:32:01.520492  264541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:01.521874  264541 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rk9b8" [cdcee528-af26-4686-8982-d4ac6d47a3b1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004028681s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable yakd --alsologtostderr -v=1: exit status 11 (261.988025ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:32:06.587491  264902 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:06.587811  264902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:06.587823  264902 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:06.587828  264902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:06.588009  264902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:06.588336  264902 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:06.588859  264902 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:06.588882  264902 addons.go:607] checking whether the cluster is paused
	I1120 20:32:06.588977  264902 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:06.588991  264902 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:06.589398  264902 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:06.610734  264902 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:06.610799  264902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:06.630414  264902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:06.731497  264902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:06.731626  264902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:06.765312  264902 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:06.765334  264902 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:06.765338  264902 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:06.765341  264902 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:06.765344  264902 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:06.765348  264902 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:06.765350  264902 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:06.765353  264902 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:06.765356  264902 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:06.765363  264902 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:06.765366  264902 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:06.765369  264902 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:06.765371  264902 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:06.765373  264902 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:06.765376  264902 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:06.765387  264902 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:06.765390  264902 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:06.765395  264902 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:06.765398  264902 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:06.765400  264902 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:06.765403  264902 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:06.765405  264902 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:06.765407  264902 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:06.765410  264902 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:06.765412  264902 cri.go:89] found id: ""
	I1120 20:32:06.765452  264902 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:06.780283  264902 out.go:203] 
	W1120 20:32:06.781504  264902 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:06.781523  264902 out.go:285] * 
	W1120 20:32:06.786487  264902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:06.787971  264902 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
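Every addon-disable failure in this run shares the same signature: before disabling an addon, minikube checks whether the cluster is paused, and on this CRI-O node that check shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist. The Go sketch below reproduces that check using the same command line shown in the log; the fallback that treats a missing runc state directory as "no paused containers" is a hypothetical mitigation, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listPausedIDs mirrors the failing call from the log above. It is an
// illustrative helper, not minikube's implementation.
func listPausedIDs() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// Assumed mitigation: runc keeps its state under /run/runc; if that
		// directory was never created (e.g. the node's containers are not
		// managed by runc), report "nothing paused" instead of aborting.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	// A real implementation would unmarshal the JSON array and keep only the
	// entries whose status is "paused".
	return []string{strings.TrimSpace(string(out))}, nil
}

func main() {
	ids, err := listPausedIDs()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

The same exit-status-11 MK_ADDON_DISABLE_PAUSED pattern repeats in each of the addon-disable failures that follow.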

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-vm8jx" [6c5213f4-3f28-458c-9ed1-ea0bdddd929b] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003870772s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-658933 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-658933 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (250.384462ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1120 20:32:03.875726  264716 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:32:03.876072  264716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:03.876085  264716 out.go:374] Setting ErrFile to fd 2...
	I1120 20:32:03.876090  264716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:32:03.876362  264716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:32:03.876728  264716 mustload.go:66] Loading cluster: addons-658933
	I1120 20:32:03.877146  264716 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:03.877167  264716 addons.go:607] checking whether the cluster is paused
	I1120 20:32:03.877296  264716 config.go:182] Loaded profile config "addons-658933": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:32:03.877317  264716 host.go:66] Checking if "addons-658933" exists ...
	I1120 20:32:03.877801  264716 cli_runner.go:164] Run: docker container inspect addons-658933 --format={{.State.Status}}
	I1120 20:32:03.896098  264716 ssh_runner.go:195] Run: systemctl --version
	I1120 20:32:03.896154  264716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-658933
	I1120 20:32:03.914373  264716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/addons-658933/id_rsa Username:docker}
	I1120 20:32:04.010104  264716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:32:04.010237  264716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:32:04.041317  264716 cri.go:89] found id: "0869a7f04bf4e3a4da1f9699827920a7fd99ff57b6501de8c29af446743691c1"
	I1120 20:32:04.041345  264716 cri.go:89] found id: "c00315bedde6da7961b4ad7d9984321aa031c858aa041b2192f06a44eafd491a"
	I1120 20:32:04.041352  264716 cri.go:89] found id: "3dc9d9f32ffaaa5549d2c441fd25b0607373f493f999d66c722eab042f61ed8d"
	I1120 20:32:04.041356  264716 cri.go:89] found id: "e84ec310b2afdcf0db71e646299841a180d3068d9a971fb08aaab40ab46f997d"
	I1120 20:32:04.041359  264716 cri.go:89] found id: "564d810ba191a43a03a1955c89bf486ff129b704a8848b594b12b3086c6a9e3a"
	I1120 20:32:04.041362  264716 cri.go:89] found id: "ca550c41b2a773923cbe02b9f89f900889407cd2ab40122a4a8227935155a70b"
	I1120 20:32:04.041365  264716 cri.go:89] found id: "6362678378ad4a1db2917607159c2edbd3f332a1f14fc94d7edb030a20557215"
	I1120 20:32:04.041368  264716 cri.go:89] found id: "cf2ac7eff8739b1748d5e3f57123aa2fabf98e62b6c00b16fc20bf6fed0ef7a7"
	I1120 20:32:04.041370  264716 cri.go:89] found id: "9224e8f92f1b15f5557a7787710a36049ae78e43f0eb7dd1dbdc95ea923c3ad9"
	I1120 20:32:04.041379  264716 cri.go:89] found id: "adac6d2c858f24a2e6c21b8123e36468d2361f3533e830374cca2a5257998118"
	I1120 20:32:04.041382  264716 cri.go:89] found id: "afe1aac38b026907c09f9f8ffbad7844380beb67980b9981e64f95e477c5d2f5"
	I1120 20:32:04.041385  264716 cri.go:89] found id: "a3370293507b78d5834e26604448b518cab8292fe2b5c8beaf90595e116d6041"
	I1120 20:32:04.041387  264716 cri.go:89] found id: "cfe3cfa35ee4e1efad3e7557bd05803e0780fcb736787c08f14a4923ab703d0a"
	I1120 20:32:04.041390  264716 cri.go:89] found id: "6d168b8373fd1df3c551b64f098b8cfc5b239f898fa443ce5c932778f8cba77a"
	I1120 20:32:04.041393  264716 cri.go:89] found id: "18df77ead4cf8ab401ffc8aea738492bc865261fa24ed93ff2c4757166b5ae2c"
	I1120 20:32:04.041406  264716 cri.go:89] found id: "2dc54febfd2877f0fa62e0480b73e24824de09e51144a3eaf1114e063b9966fa"
	I1120 20:32:04.041416  264716 cri.go:89] found id: "c812b6447964f4e5b452c0a610b56967d9df9788859952dcef981a17ea6a81f7"
	I1120 20:32:04.041424  264716 cri.go:89] found id: "b9c2a6d4679fd0c160443b031db6967984237bb8a8c8afa58cb5af353ef03e9b"
	I1120 20:32:04.041429  264716 cri.go:89] found id: "2f3f9b31aedbb07eb04280900fb8854e9730670c418079c95b60a57987a74feb"
	I1120 20:32:04.041433  264716 cri.go:89] found id: "cb4964e2e68f95c7cc2755223a510569e73355f5c03153a0deeebb2b21c90dc3"
	I1120 20:32:04.041438  264716 cri.go:89] found id: "c51ec37256def6c0262b60bae44a3223fddadb0406f5f2c93abd421f468b310f"
	I1120 20:32:04.041442  264716 cri.go:89] found id: "b69462e9ce88e37e2184885c9c11a86bfb6300cc5c9e44a3e840da5520674037"
	I1120 20:32:04.041446  264716 cri.go:89] found id: "6d905baa8985ba6eff2648db31c4b19e3c80fde806af85066531404a52d19106"
	I1120 20:32:04.041450  264716 cri.go:89] found id: "4038552b2ad493b2345e10ee9c2e3a931297e3c2ab8e9141cf7a40188b32454e"
	I1120 20:32:04.041452  264716 cri.go:89] found id: ""
	I1120 20:32:04.041493  264716 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 20:32:04.056933  264716 out.go:203] 
	W1120 20:32:04.058286  264716 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 20:32:04.058315  264716 out.go:285] * 
	W1120 20:32:04.062649  264716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 20:32:04.063895  264716 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-658933 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

TestFunctional/parallel/ServiceCmdConnect (602.99s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-041399 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-041399 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-l4l2w" [f6756e03-3dc9-48a1-b448-53a215d9e89c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-041399 -n functional-041399
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-20 20:47:57.274152721 +0000 UTC m=+1102.574172847
functional_test.go:1645: (dbg) Run:  kubectl --context functional-041399 describe po hello-node-connect-7d85dfc575-l4l2w -n default
functional_test.go:1645: (dbg) kubectl --context functional-041399 describe po hello-node-connect-7d85dfc575-l4l2w -n default:
Name:             hello-node-connect-7d85dfc575-l4l2w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-041399/192.168.49.2
Start Time:       Thu, 20 Nov 2025 20:37:56 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n84tn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-n84tn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l4l2w to functional-041399
Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 9m57s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m45s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m45s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-041399 logs hello-node-connect-7d85dfc575-l4l2w -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-041399 logs hello-node-connect-7d85dfc575-l4l2w -n default: exit status 1 (63.848668ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-l4l2w" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-041399 logs hello-node-connect-7d85dfc575-l4l2w -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-041399 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-l4l2w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-041399/192.168.49.2
Start Time:       Thu, 20 Nov 2025 20:37:56 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n84tn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-n84tn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l4l2w to functional-041399
Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 9m57s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m45s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m45s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-041399 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-041399 logs -l app=hello-node-connect: exit status 1 (97.559977ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-l4l2w" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-041399 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-041399 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.241.208
IPs:                      10.102.241.208
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32001/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
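The root cause is visible in the kubelet events above: the deployment is created with the bare name "kicbase/echo-server", and CRI-O's short-name policy is enforcing, so the unqualified pull is rejected as ambiguous ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"), which is why the service has no endpoints. A minimal sketch of the usual fix, pinning the registry in the image reference, follows; the docker.io prefix is an assumption about where the image is published, while the rest of the command matches the one at functional_test.go:1636.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Fully qualified image reference instead of the short name used by the
	// test; with an explicit registry, CRI-O's short-name resolution never
	// has to choose between candidate registries.
	cmd := exec.Command("kubectl",
		"--context", "functional-041399",
		"create", "deployment", "hello-node-connect",
		"--image", "docker.io/kicbase/echo-server:latest", // registry prefix is assumed
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("create deployment:", err)
	}
}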
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-041399
helpers_test.go:243: (dbg) docker inspect functional-041399:

-- stdout --
	[
	    {
	        "Id": "8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d",
	        "Created": "2025-11-20T20:36:00.551942113Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277772,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:36:00.586727964Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d/hosts",
	        "LogPath": "/var/lib/docker/containers/8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d/8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d-json.log",
	        "Name": "/functional-041399",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-041399:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-041399",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e36aa4a5e8c666916497eea2964e6444e03180dc172814b25439759f0357e6d",
	                "LowerDir": "/var/lib/docker/overlay2/8ef10219e2dcfa778fc1cd41c9fb19893aac04726ea0424e959b06a0c1e80c65-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ef10219e2dcfa778fc1cd41c9fb19893aac04726ea0424e959b06a0c1e80c65/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ef10219e2dcfa778fc1cd41c9fb19893aac04726ea0424e959b06a0c1e80c65/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ef10219e2dcfa778fc1cd41c9fb19893aac04726ea0424e959b06a0c1e80c65/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-041399",
	                "Source": "/var/lib/docker/volumes/functional-041399/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-041399",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-041399",
	                "name.minikube.sigs.k8s.io": "functional-041399",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0d3e949ba4bfeb24df576afaef873824cb96c9ab1a52cc639cb571022b4e11fd",
	            "SandboxKey": "/var/run/docker/netns/0d3e949ba4bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-041399": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "278f4f029564aa336fe813d55179632bbace5fe2abdcf5e380c7f83010c5a991",
	                    "EndpointID": "05e37bed38dbf0070db3303be04f3952ebb6bf44d468431d4d3b345318784d33",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "7e:af:ba:f5:c3:1e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-041399",
	                        "8e36aa4a5e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
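The inspect output above is also where the harness gets its SSH endpoint: the NetworkSettings.Ports entry for 22/tcp publishes host port 32778 on 127.0.0.1, which is what the sshutil lines elsewhere in this report connect to. The sketch below performs the same lookup with the exact inspect format string used by the cli_runner entries in the log; the helper name and error handling are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port published for the container's 22/tcp,
// using the same Go-template format string seen in the cli_runner log lines.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-041399")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port) // 32778 per the inspect output above
}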
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-041399 -n functional-041399
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 logs -n 25: (1.310605061s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-041399 ssh findmnt -T /mount1                                                                           │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ mount          │ -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount1 --alsologtostderr -v=1 │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ ssh            │ functional-041399 ssh findmnt -T /mount1                                                                           │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ ssh            │ functional-041399 ssh findmnt -T /mount2                                                                           │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ ssh            │ functional-041399 ssh findmnt -T /mount3                                                                           │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ mount          │ -p functional-041399 --kill=true                                                                                   │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ start          │ -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ start          │ -p functional-041399 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ start          │ -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-041399 --alsologtostderr -v=1                                                     │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ update-context │ functional-041399 update-context --alsologtostderr -v=2                                                            │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ update-context │ functional-041399 update-context --alsologtostderr -v=2                                                            │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ update-context │ functional-041399 update-context --alsologtostderr -v=2                                                            │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ image          │ functional-041399 image ls --format short --alsologtostderr                                                        │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ image          │ functional-041399 image ls --format yaml --alsologtostderr                                                         │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ ssh            │ functional-041399 ssh pgrep buildkitd                                                                              │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │                     │
	│ image          │ functional-041399 image build -t localhost/my-image:functional-041399 testdata/build --alsologtostderr             │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ image          │ functional-041399 image ls --format json --alsologtostderr                                                         │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ image          │ functional-041399 image ls --format table --alsologtostderr                                                        │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ image          │ functional-041399 image ls                                                                                         │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:38 UTC │ 20 Nov 25 20:38 UTC │
	│ service        │ functional-041399 service list                                                                                     │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:47 UTC │ 20 Nov 25 20:47 UTC │
	│ service        │ functional-041399 service list -o json                                                                             │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:47 UTC │ 20 Nov 25 20:47 UTC │
	│ service        │ functional-041399 service --namespace=default --https --url hello-node                                             │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:47 UTC │                     │
	│ service        │ functional-041399 service hello-node --url --format={{.IP}}                                                        │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:47 UTC │                     │
	│ service        │ functional-041399 service hello-node --url                                                                         │ functional-041399 │ jenkins │ v1.37.0 │ 20 Nov 25 20:47 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:38:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:38:11.855961  293055 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:38:11.856241  293055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.856252  293055 out.go:374] Setting ErrFile to fd 2...
	I1120 20:38:11.856256  293055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.856576  293055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:38:11.857035  293055 out.go:368] Setting JSON to false
	I1120 20:38:11.857961  293055 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12034,"bootTime":1763659058,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:38:11.858065  293055 start.go:143] virtualization: kvm guest
	I1120 20:38:11.859952  293055 out.go:179] * [functional-041399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:38:11.861323  293055 notify.go:221] Checking for updates...
	I1120 20:38:11.861347  293055 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:38:11.862618  293055 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:38:11.864251  293055 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:38:11.865525  293055 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:38:11.866889  293055 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:38:11.868252  293055 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:38:11.869944  293055 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:38:11.870490  293055 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:38:11.895674  293055 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:38:11.895851  293055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:38:11.958996  293055 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-20 20:38:11.94832152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:38:11.959097  293055 docker.go:319] overlay module found
	I1120 20:38:11.965194  293055 out.go:179] * Using the docker driver based on the existing profile
	I1120 20:38:11.966444  293055 start.go:309] selected driver: docker
	I1120 20:38:11.966459  293055 start.go:930] validating driver "docker" against &{Name:functional-041399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-041399 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:38:11.966536  293055 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:38:11.968144  293055 out.go:203] 
	W1120 20:38:11.969275  293055 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 20:38:11.970401  293055 out.go:203] 
	
	
	==> CRI-O <==
	Nov 20 20:38:15 functional-041399 crio[3619]: time="2025-11-20T20:38:15.517477323Z" level=info msg="Stopping pod sandbox: be62d4820fc0be780dbbc76736c50bc132e731ef1adb5d7ce1ef1f0f6d7abd7f" id=9360c044-d61b-4bc7-b8f2-22b4e6e5f7e0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 20:38:15 functional-041399 crio[3619]: time="2025-11-20T20:38:15.517523507Z" level=info msg="Stopped pod sandbox (already stopped): be62d4820fc0be780dbbc76736c50bc132e731ef1adb5d7ce1ef1f0f6d7abd7f" id=9360c044-d61b-4bc7-b8f2-22b4e6e5f7e0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 20 20:38:15 functional-041399 crio[3619]: time="2025-11-20T20:38:15.517880006Z" level=info msg="Removing pod sandbox: be62d4820fc0be780dbbc76736c50bc132e731ef1adb5d7ce1ef1f0f6d7abd7f" id=4705e56b-a31a-46f0-a9b7-5764942047a1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 20 20:38:15 functional-041399 crio[3619]: time="2025-11-20T20:38:15.520069994Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 20:38:15 functional-041399 crio[3619]: time="2025-11-20T20:38:15.520119752Z" level=info msg="Removed pod sandbox: be62d4820fc0be780dbbc76736c50bc132e731ef1adb5d7ce1ef1f0f6d7abd7f" id=4705e56b-a31a-46f0-a9b7-5764942047a1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.985974516Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=4458c696-cf31-4a81-8b68-c799db822306 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.986711331Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=cb10d578-fc47-46e5-aeb6-70569a264000 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.988510437Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4e9600ae-b82c-48cc-9a08-553267fa7a92 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.989038988Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=30ff809a-a0b1-4195-b6dd-0ecaa1b45cad name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.993894377Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wdxvn/kubernetes-dashboard" id=31b176c2-2caf-4ba5-a0c7-df33f7749c97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.994033545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.998765681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.998987728Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/08882ad6d997d7cf1fe748df1ed63bf0439d1d1a6c1caf19f6ffb8fa37ca7aba/merged/etc/group: no such file or directory"
	Nov 20 20:38:18 functional-041399 crio[3619]: time="2025-11-20T20:38:18.99942692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:38:19 functional-041399 crio[3619]: time="2025-11-20T20:38:19.03446311Z" level=info msg="Created container 3d9353c518be3605065e69df711ac7cf61a9a3108b625b9c7c94f2f9d6e27d4f: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wdxvn/kubernetes-dashboard" id=31b176c2-2caf-4ba5-a0c7-df33f7749c97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:38:19 functional-041399 crio[3619]: time="2025-11-20T20:38:19.035140345Z" level=info msg="Starting container: 3d9353c518be3605065e69df711ac7cf61a9a3108b625b9c7c94f2f9d6e27d4f" id=14701e7c-54fb-4e95-b911-28d7da54d1db name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:38:19 functional-041399 crio[3619]: time="2025-11-20T20:38:19.037375597Z" level=info msg="Started container" PID=7678 containerID=3d9353c518be3605065e69df711ac7cf61a9a3108b625b9c7c94f2f9d6e27d4f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wdxvn/kubernetes-dashboard id=14701e7c-54fb-4e95-b911-28d7da54d1db name=/runtime.v1.RuntimeService/StartContainer sandboxID=6898ee373b0e172a2dac14a84b24aa280ec08c4c37ab54b10b6ff626263cc462
	Nov 20 20:38:31 functional-041399 crio[3619]: time="2025-11-20T20:38:31.523328016Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b4e4f0df-0d8f-4778-9f2f-6a0fe4abbd4a name=/runtime.v1.ImageService/PullImage
	Nov 20 20:38:46 functional-041399 crio[3619]: time="2025-11-20T20:38:46.523482146Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=47aa6488-78af-4fac-a460-ba6a46c5e1ec name=/runtime.v1.ImageService/PullImage
	Nov 20 20:39:15 functional-041399 crio[3619]: time="2025-11-20T20:39:15.523358162Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=53c7b831-e14d-4771-84e9-6f4b693e09fa name=/runtime.v1.ImageService/PullImage
	Nov 20 20:39:37 functional-041399 crio[3619]: time="2025-11-20T20:39:37.523004343Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7885da98-15c1-479a-a299-7583cc6ce1ab name=/runtime.v1.ImageService/PullImage
	Nov 20 20:40:37 functional-041399 crio[3619]: time="2025-11-20T20:40:37.523236272Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=978bea6c-18a7-4c9a-af73-e585bfafc03c name=/runtime.v1.ImageService/PullImage
	Nov 20 20:41:02 functional-041399 crio[3619]: time="2025-11-20T20:41:02.523342424Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=931517b1-5eeb-4300-87d1-67b82aedc466 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:43:29 functional-041399 crio[3619]: time="2025-11-20T20:43:29.523132709Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=de120d41-e593-405c-924c-d073dc37c904 name=/runtime.v1.ImageService/PullImage
	Nov 20 20:43:52 functional-041399 crio[3619]: time="2025-11-20T20:43:52.523136831Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=08ab8129-d36f-431c-a55e-f262eff75ebd name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3d9353c518be3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   6898ee373b0e1       kubernetes-dashboard-855c9754f9-wdxvn        kubernetes-dashboard
	809fdec0e145c       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   7b8bed8fd8f61       dashboard-metrics-scraper-77bf4d6c4c-wb6ns   kubernetes-dashboard
	ef72762f7af23       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   459658af639ad       sp-pod                                       default
	b8db860dc34e4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   22ae3309e6b48       busybox-mount                                default
	f7446d5d6fae2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   d6aa41dae8384       nginx-svc                                    default
	c4fe1ade6c61e       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   381fff5ef6738       mysql-5bb876957f-r5gnk                       default
	5e18ae4a3559b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   2a46f57b1dff3       kube-controller-manager-functional-041399    kube-system
	c94424bbc390d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   5eedac86b0861       kube-apiserver-functional-041399             kube-system
	b6a35456ae9bb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   54e54a8562962       etcd-functional-041399                       kube-system
	58f669327cb94       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   2a46f57b1dff3       kube-controller-manager-functional-041399    kube-system
	a672941a090b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   d3b9a9653fb10       kube-scheduler-functional-041399             kube-system
	14276181884b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   71af083745934       kindnet-7q7lg                                kube-system
	fe0a9e1d158d4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   94816aafeacf2       kube-proxy-dbhwm                             kube-system
	8b63e88671cb3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   8fd0a12528cac       coredns-66bc5c9577-27n5f                     kube-system
	44cd661ba8d5e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   1c877004ead90       storage-provisioner                          kube-system
	0d339095da7e0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   8fd0a12528cac       coredns-66bc5c9577-27n5f                     kube-system
	716a8a4029329       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   1c877004ead90       storage-provisioner                          kube-system
	54a53c308d5ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   94816aafeacf2       kube-proxy-dbhwm                             kube-system
	b07ba8ecdacce       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   71af083745934       kindnet-7q7lg                                kube-system
	22e5a51176fe5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   d3b9a9653fb10       kube-scheduler-functional-041399             kube-system
	b995be2f906b1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   54e54a8562962       etcd-functional-041399                       kube-system
	
	
	==> coredns [0d339095da7e0d6dd9cc4d9861668fba5bb60cd077f12abf05a43329901b871e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34201 - 8144 "HINFO IN 3604965976977697445.3149283636116106888. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.158900104s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b63e88671cb363501d499e98da00b1aa66cb34a44f4f99c0b0d08d62f5cbda6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40953 - 15308 "HINFO IN 6896354637615438751.4006683530467566455. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.071256004s
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-041399
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-041399
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=functional-041399
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_36_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:36:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-041399
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:47:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:45:17 +0000   Thu, 20 Nov 2025 20:36:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:45:17 +0000   Thu, 20 Nov 2025 20:36:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:45:17 +0000   Thu, 20 Nov 2025 20:36:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:45:17 +0000   Thu, 20 Nov 2025 20:36:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-041399
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                a5fa2a42-0881-4511-a894-427701408557
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bd8kz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-l4l2w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-r5gnk                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-27n5f                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-041399                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-7q7lg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-041399              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-041399     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dbhwm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-041399              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wb6ns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wdxvn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-041399 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-041399 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-041399 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-041399 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-041399 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-041399 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-041399 event: Registered Node functional-041399 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-041399 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-041399 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-041399 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-041399 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-041399 event: Registered Node functional-041399 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [b6a35456ae9bbb0bf60b57e943bf0236db4aeaf4b8de6ece62696aba6fd23fa8] <==
	{"level":"warn","ts":"2025-11-20T20:37:16.971459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:16.978957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:16.985850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:16.993033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:16.999646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.015470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.023553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.029967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.036960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.044006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.050009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.057731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.064370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.070347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.076542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.083921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.089972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.104996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.111502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.117732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:37:17.174315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50682","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:37:48.783033Z","caller":"traceutil/trace.go:172","msg":"trace[1413111242] transaction","detail":"{read_only:false; response_revision:677; number_of_response:1; }","duration":"110.965151ms","start":"2025-11-20T20:37:48.672052Z","end":"2025-11-20T20:37:48.783018Z","steps":["trace[1413111242] 'process raft request'  (duration: 110.869937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T20:47:16.675837Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1163}
	{"level":"info","ts":"2025-11-20T20:47:16.695558Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1163,"took":"19.380601ms","hash":1196729049,"current-db-size-bytes":3518464,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-20T20:47:16.695631Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1196729049,"revision":1163,"compact-revision":-1}
	
	
	==> etcd [b995be2f906b154f4b381c646fcdb213aa9ad1eba1b1c67429720cc6491a5216] <==
	{"level":"warn","ts":"2025-11-20T20:36:14.394354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.400254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.406129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.418666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.424661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.430657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:36:14.476583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47318","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:37:14.259623Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-20T20:37:14.259734Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-041399","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-20T20:37:14.259823Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T20:37:14.261391Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-20T20:37:14.261457Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T20:37:14.261498Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-20T20:37:14.261555Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-20T20:37:14.261549Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-20T20:37:14.261563Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T20:37:14.261635Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T20:37:14.261648Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-20T20:37:14.261608Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-20T20:37:14.261688Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-20T20:37:14.261720Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T20:37:14.263393Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-20T20:37:14.263479Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-20T20:37:14.263515Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-20T20:37:14.263546Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-041399","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:47:58 up  3:30,  0 user,  load average: 0.74, 0.46, 0.74
	Linux functional-041399 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14276181884b974bad6dd33b7519c2c1481bb18001c40442fa52352f2d5c30a4] <==
	I1120 20:45:55.027981       1 main.go:301] handling current node
	I1120 20:46:05.032756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:05.032793       1 main.go:301] handling current node
	I1120 20:46:15.027415       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:15.027464       1 main.go:301] handling current node
	I1120 20:46:25.026550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:25.026592       1 main.go:301] handling current node
	I1120 20:46:35.031438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:35.031476       1 main.go:301] handling current node
	I1120 20:46:45.027381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:45.027422       1 main.go:301] handling current node
	I1120 20:46:55.027637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:46:55.027675       1 main.go:301] handling current node
	I1120 20:47:05.032291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:05.032326       1 main.go:301] handling current node
	I1120 20:47:15.028316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:15.028353       1 main.go:301] handling current node
	I1120 20:47:25.028351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:25.028393       1 main.go:301] handling current node
	I1120 20:47:35.029354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:35.029388       1 main.go:301] handling current node
	I1120 20:47:45.027513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:45.027559       1 main.go:301] handling current node
	I1120 20:47:55.028773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:47:55.028809       1 main.go:301] handling current node
	
	
	==> kindnet [b07ba8ecdacce82ec63eb1466e14ccb3f4f52145413a24adf951ce771964e61e] <==
	I1120 20:36:23.506522       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 20:36:23.506794       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1120 20:36:23.506932       1 main.go:148] setting mtu 1500 for CNI 
	I1120 20:36:23.506948       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 20:36:23.506971       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T20:36:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 20:36:23.707595       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 20:36:23.707850       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 20:36:23.707866       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 20:36:23.708521       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 20:36:23.800997       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 20:36:23.805430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 20:36:23.805429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1120 20:36:23.805429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 20:36:25.408641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 20:36:25.408679       1 metrics.go:72] Registering metrics
	I1120 20:36:25.408738       1 controller.go:711] "Syncing nftables rules"
	I1120 20:36:33.709350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:36:33.709445       1 main.go:301] handling current node
	I1120 20:36:43.715090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:36:43.715130       1 main.go:301] handling current node
	I1120 20:36:53.711319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:36:53.711385       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c94424bbc390d5163c864d4c5bbbd46fb12c3850070d85f0521ed4b1ac20d977] <==
	I1120 20:37:17.658049       1 policy_source.go:240] refreshing policies
	I1120 20:37:17.754729       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:37:18.533453       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 20:37:18.556892       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1120 20:37:18.839095       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1120 20:37:18.840401       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:37:18.844793       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:37:19.379090       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:37:19.470208       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 20:37:19.519338       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 20:37:19.526452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 20:37:21.354036       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:37:35.636909       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.163.220"}
	I1120 20:37:40.537155       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.219.109"}
	I1120 20:37:41.765812       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.4.94"}
	I1120 20:37:44.050587       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.61.156"}
	E1120 20:37:55.679486       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37018: use of closed network connection
	E1120 20:37:56.647976       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37080: use of closed network connection
	I1120 20:37:56.938744       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.241.208"}
	E1120 20:38:07.371362       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41994: use of closed network connection
	I1120 20:38:12.821595       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 20:38:12.918794       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.109.123"}
	I1120 20:38:12.929862       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.156.140"}
	E1120 20:38:15.973963       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38338: use of closed network connection
	I1120 20:47:17.568604       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [58f669327cb94d75c60aa3bb45985606f1e1f22cb8c687eda9ec684421681d7e] <==
	I1120 20:37:05.238158       1 serving.go:386] Generated self-signed cert in-memory
	I1120 20:37:05.536999       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1120 20:37:05.537108       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:37:05.539300       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1120 20:37:05.539375       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1120 20:37:05.539701       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1120 20:37:05.539730       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1120 20:37:15.541620       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [5e18ae4a3559bae7b6b3348f46e56d1daa7d67185fe0d5f597ce3821edad7141] <==
	I1120 20:37:20.817993       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 20:37:20.850608       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 20:37:20.850658       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 20:37:20.850732       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:37:20.850732       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 20:37:20.850784       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 20:37:20.851039       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 20:37:20.853198       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:37:20.854837       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 20:37:20.855822       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:37:20.856982       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 20:37:20.857003       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 20:37:20.860954       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1120 20:37:20.899949       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:37:20.899976       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 20:37:20.899984       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 20:37:20.961390       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1120 20:38:12.870252       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.870782       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.876957       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.877270       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.884449       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.885072       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.890679       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1120 20:38:12.892903       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [54a53c308d5ff4ff06b6d6059572aa3a61f93dddcf5d077bd89dbed38f675401] <==
	I1120 20:36:23.355586       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:36:23.431187       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 20:36:23.532599       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:36:23.532634       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 20:36:23.532717       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:36:23.550652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:36:23.550701       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:36:23.555885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:36:23.556267       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:36:23.556302       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:36:23.558786       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:36:23.558803       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:36:23.558813       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:36:23.558823       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:36:23.558800       1 config.go:200] "Starting service config controller"
	I1120 20:36:23.558836       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:36:23.558855       1 config.go:309] "Starting node config controller"
	I1120 20:36:23.558861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:36:23.558869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:36:23.658969       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:36:23.659055       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:36:23.659107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [fe0a9e1d158d4c162d0dd9cf13173e5d5c89ec7f1daa777e3d9aa63fb6a9fcf2] <==
	I1120 20:37:03.820837       1 config.go:200] "Starting service config controller"
	I1120 20:37:03.820855       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:37:03.820867       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:37:03.820884       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:37:03.820891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:37:03.820954       1 config.go:309] "Starting node config controller"
	I1120 20:37:03.820970       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:37:03.820979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1120 20:37:03.821959       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1120 20:37:03.822109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:03.822130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1120 20:37:03.822191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1120 20:37:05.169801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1120 20:37:05.356432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1120 20:37:05.391989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:06.986208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1120 20:37:07.037063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1120 20:37:07.823633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:11.659951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1120 20:37:11.719639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1120 20:37:13.636664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:15.923583       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I1120 20:37:21.122420       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:37:21.520995       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:37:23.521533       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [22e5a51176fe5b8797282eaa48e62b26592d7c9504e3b83f136aa4a879211eb0] <==
	E1120 20:36:14.877793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:36:14.877857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:36:14.877941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:36:14.878033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:36:14.878115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:36:14.878173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:36:14.878274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:36:14.878312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:36:15.789913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:36:15.804015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:36:15.814155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:36:15.923960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:36:15.929064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:36:15.993590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:36:16.025846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:36:16.045922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:36:16.051034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:36:16.219334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 20:36:18.573394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:37:03.639423       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:37:03.639556       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1120 20:37:03.639651       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1120 20:37:03.639737       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1120 20:37:03.639746       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1120 20:37:03.639764       1 run.go:72] "command failed" err="finished without leader elect"
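
The burst of "Failed to watch ... is forbidden" errors at 20:36:14-16 is typical of a kube-scheduler that comes up before the apiserver has finished reconciling its bootstrap RBAC: the informers retry until the system:kube-scheduler bindings exist, and the 20:36:18 "Caches are synced" line shows they eventually did. If RBAC were genuinely broken, the quickest confirmation is kubectl impersonation; a minimal spot-check, assuming kubectl access to the same cluster:

	kubectl auth can-i list persistentvolumeclaims --as=system:kube-scheduler --all-namespaces
	kubectl auth can-i watch nodes --as=system:kube-scheduler

Both should answer "yes" once bootstrapping has settled; a persistent "no" would point at a real ClusterRoleBinding problem rather than startup ordering.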
	
	
	==> kube-scheduler [a672941a090b5553b8187c07be83c07fa6d3ac6eff360e8ffb79c55207c9faaa] <==
	E1120 20:37:09.579548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:37:09.780793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 20:37:10.002141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:37:10.045008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:37:10.180333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:12.546232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 20:37:12.649601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 20:37:12.973267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 20:37:13.112130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 20:37:13.554499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 20:37:13.595089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 20:37:13.757498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 20:37:13.846336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 20:37:13.861918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 20:37:13.868562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 20:37:13.914297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 20:37:13.961159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 20:37:14.048138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 20:37:14.066740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 20:37:14.471902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 20:37:14.597061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 20:37:15.026954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 20:37:15.258415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 20:37:15.678114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1120 20:37:24.321846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
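
This second scheduler instance spends roughly fifteen seconds getting "connection refused" from https://192.168.49.2:8441 because the apiserver itself was still restarting; the errors stop once its caches sync at 20:37:24. The same recovery can be watched by hand; a small sketch, assuming the profile's non-default apiserver port 8441 and that anonymous access to the health endpoints is enabled (the default):

	kubectl --context functional-041399 get --raw /readyz
	# or, without a working kubeconfig:
	curl -k https://192.168.49.2:8441/readyz

/readyz returns "ok" once the apiserver is serving again.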
	
	
	==> kubelet <==
	Nov 20 20:45:28 functional-041399 kubelet[4217]: E1120 20:45:28.522876    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:45:32 functional-041399 kubelet[4217]: E1120 20:45:32.522237    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:45:40 functional-041399 kubelet[4217]: E1120 20:45:40.522697    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:45:45 functional-041399 kubelet[4217]: E1120 20:45:45.522829    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:45:51 functional-041399 kubelet[4217]: E1120 20:45:51.522565    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:45:56 functional-041399 kubelet[4217]: E1120 20:45:56.522796    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:46:03 functional-041399 kubelet[4217]: E1120 20:46:03.523075    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:46:07 functional-041399 kubelet[4217]: E1120 20:46:07.523151    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:46:14 functional-041399 kubelet[4217]: E1120 20:46:14.522281    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:46:19 functional-041399 kubelet[4217]: E1120 20:46:19.524140    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:46:27 functional-041399 kubelet[4217]: E1120 20:46:27.523147    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:46:32 functional-041399 kubelet[4217]: E1120 20:46:32.522768    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:46:38 functional-041399 kubelet[4217]: E1120 20:46:38.522891    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:46:45 functional-041399 kubelet[4217]: E1120 20:46:45.522645    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:46:53 functional-041399 kubelet[4217]: E1120 20:46:53.523109    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:46:59 functional-041399 kubelet[4217]: E1120 20:46:59.522253    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:47:05 functional-041399 kubelet[4217]: E1120 20:47:05.522638    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:47:10 functional-041399 kubelet[4217]: E1120 20:47:10.522909    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:47:17 functional-041399 kubelet[4217]: E1120 20:47:17.522761    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:47:21 functional-041399 kubelet[4217]: E1120 20:47:21.522321    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:47:30 functional-041399 kubelet[4217]: E1120 20:47:30.522920    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:47:34 functional-041399 kubelet[4217]: E1120 20:47:34.522089    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:47:45 functional-041399 kubelet[4217]: E1120 20:47:45.522745    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-l4l2w" podUID="f6756e03-3dc9-48a1-b448-53a215d9e89c"
	Nov 20 20:47:45 functional-041399 kubelet[4217]: E1120 20:47:45.522882    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
	Nov 20 20:47:57 functional-041399 kubelet[4217]: E1120 20:47:57.522271    4217 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bd8kz" podUID="6a606244-ba76-4b1a-94ca-56797ccbfd8d"
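
Every kubelet error in this block has the same root cause: the hello-node pods reference the unqualified image name kicbase/echo-server, and the node's CRI-O is running with short-name-mode = "enforcing", which refuses to guess a registry when a short name resolves ambiguously. One way to unblock the pulls without editing the manifests is a short-name alias dropped into the node's registries.conf.d; a hedged, untested sketch (the drop-in filename is arbitrary, docker.io is assumed to be the intended registry, and minikube ssh is assumed to forward stdin):

	minikube -p functional-041399 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<-'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	minikube -p functional-041399 ssh -- sudo systemctl restart crio

The restart is belt-and-braces; once the alias resolves, the kubelet's normal backoff retries the pull on its own.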
	
	
	==> kubernetes-dashboard [3d9353c518be3605065e69df711ac7cf61a9a3108b625b9c7c94f2f9d6e27d4f] <==
	2025/11/20 20:38:19 Starting overwatch
	2025/11/20 20:38:19 Using namespace: kubernetes-dashboard
	2025/11/20 20:38:19 Using in-cluster config to connect to apiserver
	2025/11/20 20:38:19 Using secret token for csrf signing
	2025/11/20 20:38:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 20:38:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 20:38:19 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 20:38:19 Generating JWE encryption key
	2025/11/20 20:38:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 20:38:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 20:38:19 Initializing JWE encryption key from synchronized object
	2025/11/20 20:38:19 Creating in-cluster Sidecar client
	2025/11/20 20:38:19 Successful request to sidecar
	2025/11/20 20:38:19 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [44cd661ba8d5e24d89efdf347c1df576fffc17d86f2aba777c7f91e1e3213a1d] <==
	W1120 20:47:33.399399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:35.402563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:35.407643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:37.411554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:37.415349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:39.418692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:39.423819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:41.427620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:41.431495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:43.435120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:43.439100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:45.441905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:45.445572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:47.448695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:47.452617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:49.455678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:49.461058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:51.464287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:51.468701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:53.472425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:53.476583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:55.479808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:55.485167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:57.488803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:47:57.493875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [716a8a402932919240bad17a04033c2269a07ee639f30aefcbe1780d48bf7bf0] <==
	W1120 20:36:38.437530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:40.441250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:40.445143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:42.447888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:42.451963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:44.456822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:44.463775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:46.467410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:46.471486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:48.475653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:48.479280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:50.482237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:50.488177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:52.491290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:52.495195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:54.499287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:54.503634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:56.506813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:56.510723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:58.514603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:36:58.520035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:37:00.523624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:37:00.528236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:37:02.531633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 20:37:02.536105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
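
Both storage-provisioner instances emit this warning pair every two seconds because the provisioner's leader-election code still renews its lock through the core/v1 Endpoints API, deprecated since v1.33 in favor of EndpointSlice; the messages are cosmetic and do not affect provisioning. The lock object itself can be inspected; a sketch, with the caveat that the object name (derived from the provisioner name) is an assumption here:

	kubectl --context functional-041399 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader-election annotation on that object should show its renew time advancing every few seconds while the provisioner holds the lease.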
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-041399 -n functional-041399
helpers_test.go:269: (dbg) Run:  kubectl --context functional-041399 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bd8kz hello-node-connect-7d85dfc575-l4l2w
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-041399 describe pod busybox-mount hello-node-75c85bcc94-bd8kz hello-node-connect-7d85dfc575-l4l2w
helpers_test.go:290: (dbg) kubectl --context functional-041399 describe pod busybox-mount hello-node-75c85bcc94-bd8kz hello-node-connect-7d85dfc575-l4l2w:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-041399/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 20:38:01 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b8db860dc34e414963edd0490e81de0396ce9ea9f7103b16b38dc1d1e4998a23
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 20 Nov 2025 20:38:03 +0000
	      Finished:     Thu, 20 Nov 2025 20:38:03 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rr4qb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rr4qb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-041399
	  Normal  Pulling    9m58s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.977s (1.977s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m56s  kubelet            Created container: mount-munger
	  Normal  Started    9m56s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bd8kz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-041399/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 20:37:41 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxphn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxphn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bd8kz to functional-041399
	  Normal   Pulling    7m22s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x42 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-l4l2w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-041399/192.168.49.2
	Start Time:       Thu, 20 Nov 2025 20:37:56 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n84tn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n84tn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-l4l2w to functional-041399
	  Normal   Pulling    6m57s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    0s (x43 over 9m59s)     kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-041399 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-041399 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bd8kz" [6a606244-ba76-4b1a-94ca-56797ccbfd8d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-041399 -n functional-041399
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-20 20:47:42.111365689 +0000 UTC m=+1087.411385815
functional_test.go:1460: (dbg) Run:  kubectl --context functional-041399 describe po hello-node-75c85bcc94-bd8kz -n default
functional_test.go:1460: (dbg) kubectl --context functional-041399 describe po hello-node-75c85bcc94-bd8kz -n default:
Name:             hello-node-75c85bcc94-bd8kz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-041399/192.168.49.2
Start Time:       Thu, 20 Nov 2025 20:37:41 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxphn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-lxphn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bd8kz to functional-041399
  Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m55s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 9m55s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x19 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m27s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-041399 logs hello-node-75c85bcc94-bd8kz -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-041399 logs hello-node-75c85bcc94-bd8kz -n default: exit status 1 (67.840483ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bd8kz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-041399 logs hello-node-75c85bcc94-bd8kz -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
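
The deployment never reports a ready replica for the same short-name reason recorded in the events above, so the 10m wait simply times out. Re-running the test's own two commands with a fully qualified image reference sidesteps the ambiguity; a minimal sketch, assuming docker.io is the intended home of kicbase/echo-server:

	kubectl --context functional-041399 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-041399 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-041399 wait --for=condition=Available deployment/hello-node --timeout=120s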

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image load --daemon kicbase/echo-server:functional-041399 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-041399" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)
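
`image ls` queries the container runtime inside the node, so a load that fails silently on the CRI-O side leaves the listing unchanged and the test only notices the absence. When reproducing by hand it helps to confirm both ends of the transfer; a sketch, assuming the tag exists in the local Docker daemon:

	docker image inspect kicbase/echo-server:functional-041399 --format '{{.Id}}'
	out/minikube-linux-amd64 -p functional-041399 image load --daemon kicbase/echo-server:functional-041399 --alsologtostderr
	out/minikube-linux-amd64 -p functional-041399 image ls --format table

The --alsologtostderr output usually names the step (save, transfer, or runtime load) that dropped the image.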

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image load --daemon kicbase/echo-server:functional-041399 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 image load --daemon kicbase/echo-server:functional-041399 --alsologtostderr: (1.87660809s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 image ls: (2.255386351s)
functional_test.go:461: expected "kicbase/echo-server:functional-041399" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-041399
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image load --daemon kicbase/echo-server:functional-041399 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-041399" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image save kicbase/echo-server:functional-041399 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)
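
The save ran for half a second, exited zero, and still left no tarball, so the failure is silent from minikube's point of view. A standalone repro that makes the outcome observable; a sketch, with /tmp standing in for the Jenkins workspace path:

	out/minikube-linux-amd64 -p functional-041399 image save kicbase/echo-server:functional-041399 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head

test -s also catches the zero-byte-file case that a bare existence check would miss.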

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1120 20:37:50.396771  289082 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:37:50.397054  289082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:37:50.397063  289082 out.go:374] Setting ErrFile to fd 2...
	I1120 20:37:50.397067  289082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:37:50.397256  289082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:37:50.397863  289082 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:37:50.397976  289082 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:37:50.398351  289082 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
	I1120 20:37:50.416238  289082 ssh_runner.go:195] Run: systemctl --version
	I1120 20:37:50.416303  289082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
	I1120 20:37:50.433864  289082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
	I1120 20:37:50.529542  289082 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1120 20:37:50.529616  289082 cache_images.go:255] Failed to load cached images for "functional-041399": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1120 20:37:50.529658  289082 cache_images.go:267] failed pushing to: functional-041399

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
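
This one is purely downstream damage: ImageSaveToFile never wrote the tarball, so the stat here fails with "no such file or directory" before any loading is attempted. Guarding the load on the artifact keeps the two failures distinguishable when reproducing; a trivial sketch:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	test -f "$tar" || { echo "save step never produced $tar" >&2; exit 1; }
	out/minikube-linux-amd64 -p functional-041399 image load "$tar" --alsologtostderr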

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-041399
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image save --daemon kicbase/echo-server:functional-041399 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-041399
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-041399: exit status 1 (18.314315ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-041399

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-041399

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
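
Note the name the test inspects: localhost/kicbase/echo-server:functional-041399, i.e. the localhost/ prefix under which CRI-O stores unqualified local tags, which `image save --daemon` is expected to carry back into Docker. Listing every echo-server tag in the daemon distinguishes "saved under an unexpected name" from "not saved at all"; a sketch:

	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -i echo-server || echo 'no echo-server images in the daemon'

Here the grep would come back empty: the save never reached the daemon at all.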

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 service --namespace=default --https --url hello-node: exit status 115 (546.923755ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30393
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-041399 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
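
HTTPS, Format, and URL all fail identically: minikube resolves the NodePort (30393) and prints the URL, then exits with SVC_UNREACHABLE because the hello-node service has no ready backend, which again traces back to the ImagePullBackOff pod rather than to the service plumbing. The gap is visible from the service's endpoint slices; a sketch (kubernetes.io/service-name is the standard EndpointSlice label):

	kubectl --context functional-041399 get endpointslices -l kubernetes.io/service-name=hello-node
	kubectl --context functional-041399 get pods -l app=hello-node

An EndpointSlice with no ready endpoints plus a Pending pod confirms the diagnosis for all three subtests.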

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 service hello-node --url --format={{.IP}}: exit status 115 (544.226148ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-041399 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)
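
--format accepts a Go template rendered against each service URL entry, so {{.IP}} alone prints only the node IP seen on stdout above. A sketch rebuilding a full URL, assuming the {{.IP}} and {{.Port}} fields from minikube's default format string:

	out/minikube-linux-amd64 -p functional-041399 service hello-node --url --format="http://{{.IP}}:{{.Port}}"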

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 service hello-node --url: exit status 115 (541.716816ms)

-- stdout --
	http://192.168.49.2:30393
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-041399 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30393
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
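
As functional_test.go:1575 shows, the URL is printed even though the command exits 115, so scripts should branch on the exit status rather than on non-empty stdout. A minimal shell guard, assuming the same binary and profile:

	if url=$(out/minikube-linux-amd64 -p functional-041399 service hello-node --url); then
		curl -fsS "$url"   # probe the endpoint only once minikube reports it reachable
	else
		echo "hello-node unreachable" >&2
	fi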

TestMultiControlPlane/serial/RestartClusterKeepsNodes (424.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 stop --alsologtostderr -v 5
E1120 20:51:46.759466  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 stop --alsologtostderr -v 5: (45.492415579s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 start --wait true --alsologtostderr -v 5
E1120 20:52:40.581486  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.587954  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.599441  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.620950  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.662453  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.743969  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:40.905518  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:41.227256  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:41.869310  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:43.150976  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:45.713721  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:52:50.835278  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:53:01.077440  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:53:09.822410  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:53:21.559078  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:54:02.521515  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:55:24.443136  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:56:46.755472  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:57:40.578801  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:58:08.285286  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-922218 start --wait true --alsologtostderr -v 5: exit status 80 (6m16.842937743s)

-- stdout --
	* [ha-922218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-922218" primary control-plane node in "ha-922218" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-922218-m02" control-plane node in "ha-922218" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-922218-m03" control-plane node in "ha-922218" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-922218-m04" worker node in "ha-922218" cluster
	* Pulling base image v0.0.48-1763507788-21924 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1120 20:52:05.328764  323157 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:52:05.329077  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329088  323157 out.go:374] Setting ErrFile to fd 2...
	I1120 20:52:05.329095  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329358  323157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:52:05.329815  323157 out.go:368] Setting JSON to false
	I1120 20:52:05.330759  323157 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12867,"bootTime":1763659058,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:52:05.330873  323157 start.go:143] virtualization: kvm guest
	I1120 20:52:05.332897  323157 out.go:179] * [ha-922218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:52:05.334089  323157 notify.go:221] Checking for updates...
	I1120 20:52:05.334111  323157 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:52:05.335153  323157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:52:05.336342  323157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:05.337453  323157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:52:05.338644  323157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:52:05.339840  323157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:52:05.341429  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:05.341547  323157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:52:05.366166  323157 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:52:05.366337  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.429868  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.418170855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.429981  323157 docker.go:319] overlay module found
	I1120 20:52:05.432415  323157 out.go:179] * Using the docker driver based on existing profile
	I1120 20:52:05.433478  323157 start.go:309] selected driver: docker
	I1120 20:52:05.433497  323157 start.go:930] validating driver "docker" against &{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.433601  323157 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:52:05.433679  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.497705  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.48528978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.498702  323157 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:05.498750  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:05.498813  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:05.498895  323157 start.go:353] cluster config:
	{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.501099  323157 out.go:179] * Starting "ha-922218" primary control-plane node in "ha-922218" cluster
	I1120 20:52:05.502199  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:05.503398  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:05.504658  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:05.504699  323157 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:52:05.504719  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:05.504760  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:05.504824  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:05.504840  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:05.505023  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.527904  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:05.527929  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:05.527945  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:05.527985  323157 start.go:360] acquireMachinesLock for ha-922218: {Name:mk7973b5b3e2bce97a45ae60ce14811fb93a6808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:05.528045  323157 start.go:364] duration metric: took 37.272µs to acquireMachinesLock for "ha-922218"
	I1120 20:52:05.528067  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:05.528078  323157 fix.go:54] fixHost starting: 
	I1120 20:52:05.528385  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.546149  323157 fix.go:112] recreateIfNeeded on ha-922218: state=Stopped err=<nil>
	W1120 20:52:05.546186  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:05.548148  323157 out.go:252] * Restarting existing docker container for "ha-922218" ...
	I1120 20:52:05.548228  323157 cli_runner.go:164] Run: docker start ha-922218
	I1120 20:52:05.829297  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.854267  323157 kic.go:430] container "ha-922218" state is running.
	I1120 20:52:05.854754  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:05.879797  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.880184  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:05.880316  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:05.902671  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:05.902972  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:05.902987  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:05.903785  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36002->127.0.0.1:32808: read: connection reset by peer
	I1120 20:52:09.038413  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
	I1120 20:52:09.038466  323157 ubuntu.go:182] provisioning hostname "ha-922218"
	I1120 20:52:09.038538  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.056776  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.057040  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.057057  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218 && echo "ha-922218" | sudo tee /etc/hostname
	I1120 20:52:09.198987  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
	I1120 20:52:09.199094  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.218187  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.218484  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.218518  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:09.350283  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:09.350320  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:09.350371  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:09.350386  323157 provision.go:84] configureAuth start
	I1120 20:52:09.350452  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:09.368706  323157 provision.go:143] copyHostCerts
	I1120 20:52:09.368743  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368777  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:09.368790  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368861  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:09.368944  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368963  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:09.368970  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368996  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:09.369044  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369060  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:09.369066  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369089  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:09.369139  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218 san=[127.0.0.1 192.168.49.2 ha-922218 localhost minikube]
	I1120 20:52:10.061446  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:10.061522  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:10.061563  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.080281  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.175628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:10.175687  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1120 20:52:10.193744  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:10.193807  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:52:10.211340  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:10.211404  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:10.229048  323157 provision.go:87] duration metric: took 878.645023ms to configureAuth
	I1120 20:52:10.229077  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:10.229298  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:10.229423  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.247922  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.248191  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:10.248210  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:10.573365  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:52:10.573392  323157 machine.go:97] duration metric: took 4.693182802s to provisionDockerMachine
	I1120 20:52:10.573407  323157 start.go:293] postStartSetup for "ha-922218" (driver="docker")
	I1120 20:52:10.573426  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:10.573499  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:10.573553  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.593733  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.690092  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:10.693995  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:10.694023  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:10.694034  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:10.694094  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:10.694185  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:10.694199  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:10.694322  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:10.702399  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:10.721119  323157 start.go:296] duration metric: took 147.693408ms for postStartSetup
	I1120 20:52:10.721235  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:10.721282  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.739969  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.833630  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:10.838327  323157 fix.go:56] duration metric: took 5.310241763s for fixHost
	I1120 20:52:10.838357  323157 start.go:83] releasing machines lock for "ha-922218", held for 5.310298505s
	I1120 20:52:10.838432  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:10.856719  323157 ssh_runner.go:195] Run: cat /version.json
	I1120 20:52:10.856760  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:10.856779  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.856845  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.876456  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.876715  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:11.025514  323157 ssh_runner.go:195] Run: systemctl --version
	I1120 20:52:11.032462  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:11.068010  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:11.072912  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:11.072991  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:11.081063  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:11.081087  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:11.081118  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:11.081168  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:11.095970  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:11.108445  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:11.108509  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:11.123137  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:11.135601  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:11.213922  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:11.297509  323157 docker.go:234] disabling docker service ...
	I1120 20:52:11.297579  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:11.312344  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:11.324558  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:11.404570  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:11.482324  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:11.495121  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:11.509896  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:11.509955  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.519009  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:11.519074  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.528081  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.536889  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.546294  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:11.554800  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.563861  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.572378  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.581389  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:11.589599  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:11.597300  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:11.674297  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:52:11.817850  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:52:11.817928  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:52:11.822052  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:52:11.822102  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:52:11.826068  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:52:11.851404  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:52:11.851494  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.879770  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.909889  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:52:11.911081  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:11.928829  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:52:11.933285  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:52:11.944894  323157 kubeadm.go:884] updating cluster {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:52:11.945069  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:11.945159  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:11.979530  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:11.979551  323157 crio.go:433] Images already preloaded, skipping extraction
	I1120 20:52:11.979599  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:12.008103  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:12.008127  323157 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:52:12.008135  323157 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 20:52:12.008259  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:52:12.008342  323157 ssh_runner.go:195] Run: crio config
	I1120 20:52:12.053953  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:12.053974  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:12.053990  323157 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:52:12.054013  323157 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-922218 NodeName:ha-922218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:52:12.054128  323157 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-922218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
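	A side note on the subnets in the generated config above: serviceSubnet 10.96.0.0/12 is the range ClusterIPs are allocated from, and its first usable address, 10.96.0.1, is the in-cluster kubernetes API Service IP, which is why that address shows up in the apiserver certificate SANs generated later in this log. A minimal Go sketch of that relationship; illustrative only, not minikube code:
	
	package main
	
	import (
		"fmt"
		"net/netip"
	)
	
	func main() {
		// serviceSubnet from the kubeadm config above
		svc := netip.MustParsePrefix("10.96.0.0/12")
		// the kubernetes Service ClusterIP is the first usable address
		apiServiceIP := svc.Addr().Next() // 10.96.0.1
		fmt.Println(apiServiceIP, svc.Contains(apiServiceIP)) // 10.96.0.1 true
	
		// podSubnet must not overlap the service range; quick sanity check
		pods := netip.MustParsePrefix("10.244.0.0/16")
		fmt.Println("overlap:", svc.Overlaps(pods)) // false
	}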
	I1120 20:52:12.054146  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:52:12.054186  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:52:12.067315  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
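	The exit status 1 here simply means grep found no ip_vs entry in the loaded-module list, so kube-vip cannot do IPVS-based control-plane load balancing and the manifest below falls back to ARP mode (vip_arp=true). A rough Go equivalent of the check, as a sketch that reads /proc/modules the way lsmod does:
	
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	// ipvsLoaded reports whether the ip_vs kernel module appears in
	// /proc/modules, the file that lsmod formats.
	func ipvsLoaded() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "ip_vs") {
				return true, nil
			}
		}
		return false, sc.Err()
	}
	
	func main() {
		ok, err := ipvsLoaded()
		fmt.Println("ip_vs loaded:", ok, err)
	}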
	I1120 20:52:12.067457  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
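	Dropping this manifest into /etc/kubernetes/manifests (done a few lines below) makes the kubelet run kube-vip as a static pod on each control-plane node; the instances elect a leader through the plndr-cp-lock lease, and the leader answers ARP for the VIP 192.168.49.254. Minikube renders the manifest behind the kube-vip.go:137 line above from a template; the fragment below is an illustrative sketch of that idea, not minikube's actual template:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// only the env entries carrying per-cluster values are templated here
	const kubeVipEnv = `    - name: address
	      value: "{{ .VIP }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: port
	      value: "{{ .Port }}"
	`
	
	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
		err := t.Execute(os.Stdout, struct {
			VIP, Interface string
			Port           int
		}{VIP: "192.168.49.254", Interface: "eth0", Port: 8443})
		if err != nil {
			panic(err)
		}
	}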
	I1120 20:52:12.067537  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:12.075923  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:52:12.076002  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 20:52:12.083739  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 20:52:12.096285  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:52:12.109031  323157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1120 20:52:12.121723  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:52:12.134083  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:52:12.137866  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
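	The one-liner above is an upsert: strip any existing control-plane.minikube.internal mapping, append the fresh one, and copy the temp file back so /etc/hosts is replaced in one step. The same idea in Go, as a sketch (minikube actually does this over SSH with the shell pipeline shown):
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	// upsertHost ensures exactly one "<ip>\t<name>" line exists in the hosts file.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		out := lines[:0]
		for _, l := range lines {
			if !strings.HasSuffix(l, "\t"+name) { // drop stale mappings
				out = append(out, l)
			}
		}
		out = append(out, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := upsertHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}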
	I1120 20:52:12.148115  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:12.228004  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:12.251717  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.2
	I1120 20:52:12.251748  323157 certs.go:195] generating shared ca certs ...
	I1120 20:52:12.251770  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.251938  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:52:12.251981  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:52:12.251992  323157 certs.go:257] generating profile certs ...
	I1120 20:52:12.252071  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:52:12.252098  323157 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09
	I1120 20:52:12.252119  323157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 20:52:12.330376  323157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 ...
	I1120 20:52:12.330417  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09: {Name:mk6b74f2e5931344472166b62a32edaf4f45744b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330619  323157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 ...
	I1120 20:52:12.330655  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09: {Name:mk229093d7281b814de77a27daa6f3543e470a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330779  323157 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt
	I1120 20:52:12.330974  323157 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key
	I1120 20:52:12.331167  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
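	The IP list on the freshly generated apiserver cert is the interesting part: it covers the ClusterIP 10.96.0.1, every control-plane node IP, and the kube-vip VIP 192.168.49.254, so clients can verify TLS against any of those endpoints. Checking that a cert really covers the VIP can be done with crypto/x509; a sketch, using the on-node path the cert is copied to a few lines below:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// VerifyHostname also accepts IP literals and checks cert.IPAddresses
		fmt.Println("covers VIP:", cert.VerifyHostname("192.168.49.254") == nil)
		fmt.Println("SAN IPs:", cert.IPAddresses)
	}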
	I1120 20:52:12.331190  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:52:12.331230  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:52:12.331254  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:52:12.331277  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:52:12.331295  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:52:12.331313  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:52:12.331331  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:52:12.331349  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:52:12.331428  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:52:12.331475  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:52:12.331490  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:52:12.331519  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:52:12.331552  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:52:12.331587  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:52:12.331662  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:12.331712  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.331735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.331750  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.332594  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:52:12.353047  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:52:12.370168  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:52:12.387211  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:52:12.405559  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:52:12.422666  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:52:12.441539  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:52:12.460737  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:52:12.479326  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:52:12.497570  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:52:12.515902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:52:12.534796  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:52:12.548189  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:52:12.554678  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.562462  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:52:12.570059  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.573962  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.574018  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.607754  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:12.615941  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.623665  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:52:12.632109  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636168  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636242  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.670187  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:52:12.678284  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.685973  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:52:12.693528  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697235  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697293  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.731035  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
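	The pattern repeated three times above is how OpenSSL's trust directory works: openssl x509 -hash prints an 8-hex-digit subject hash (here b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the two user certs), and a <hash>.0 symlink in /etc/ssl/certs lets the library locate the CA by subject at verification time. Since the subject-hash canonicalisation is fiddly to reimplement, a sketch that shells out to openssl is the pragmatic route:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any existing link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println(link, "->", pemPath)
	}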
	I1120 20:52:12.738959  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:52:12.742968  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:52:12.789124  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:52:12.832435  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:52:12.886449  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:52:12.943193  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:52:12.978550  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
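	openssl x509 -checkend 86400 exits 0 only if the certificate will still be valid 24 hours from now; this series of checks is how minikube decides whether a restart can reuse the existing control-plane certs. The equivalent test in Go, as a sketch:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin mirrors `openssl x509 -checkend <seconds>`: true means
	// the certificate will already be expired d from now.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, err)
	}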
	I1120 20:52:13.013640  323157 kubeadm.go:401] StartCluster: {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:13.013797  323157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:52:13.013859  323157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:52:13.049696  323157 cri.go:89] found id: "65d1e3fad6b2daa0d2eb48dff43ccc96c150434dda9afd9eeaf84004fee7ace3"
	I1120 20:52:13.049721  323157 cri.go:89] found id: "8b6d87aa881c9d7ce48cf020cc5a82bcd71165681bd09bdbef589896ef08b244"
	I1120 20:52:13.049727  323157 cri.go:89] found id: "406607e74d1618ca02cbf22003052ea65983c0e1235732ec547478bff625b9ff"
	I1120 20:52:13.049732  323157 cri.go:89] found id: "9e882a89de870c006dd62af4f419f69f18af696b07ee1686b859a279092e03e0"
	I1120 20:52:13.049737  323157 cri.go:89] found id: "45a868d0ee3cc88db4f8ceed46d0f4eddce85b589457dcbb93848dd871b099bf"
	I1120 20:52:13.049741  323157 cri.go:89] found id: ""
	I1120 20:52:13.049788  323157 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 20:52:13.062401  323157 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:52:13Z" level=error msg="open /run/runc: no such file or directory"
	I1120 20:52:13.062470  323157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:52:13.070809  323157 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 20:52:13.070832  323157 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 20:52:13.070881  323157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 20:52:13.078757  323157 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:52:13.079306  323157 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-922218" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.079441  323157 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "ha-922218" cluster setting kubeconfig missing "ha-922218" context setting]
	I1120 20:52:13.079865  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.080582  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 20:52:13.081160  323157 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 20:52:13.081177  323157 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 20:52:13.081183  323157 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 20:52:13.081188  323157 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 20:52:13.081196  323157 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 20:52:13.081252  323157 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 20:52:13.081712  323157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 20:52:13.089447  323157 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 20:52:13.089467  323157 kubeadm.go:602] duration metric: took 18.629525ms to restartPrimaryControlPlane
	I1120 20:52:13.089478  323157 kubeadm.go:403] duration metric: took 75.851486ms to StartCluster
	I1120 20:52:13.089496  323157 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.089563  323157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.090205  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.090465  323157 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:52:13.090490  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:52:13.090499  323157 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:13.090755  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.093327  323157 out.go:179] * Enabled addons: 
	I1120 20:52:13.094381  323157 addons.go:515] duration metric: took 3.879805ms for enable addons: enabled=[]
	I1120 20:52:13.094412  323157 start.go:247] waiting for cluster config update ...
	I1120 20:52:13.094424  323157 start.go:256] writing updated cluster config ...
	I1120 20:52:13.095823  323157 out.go:203] 
	I1120 20:52:13.097078  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.097195  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.098642  323157 out.go:179] * Starting "ha-922218-m02" control-plane node in "ha-922218" cluster
	I1120 20:52:13.099780  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:13.101045  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:13.102201  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:13.102233  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:13.102244  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:13.102316  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:13.102330  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:13.102446  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.124350  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:13.124372  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:13.124388  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:13.124422  323157 start.go:360] acquireMachinesLock for ha-922218-m02: {Name:mk327cff0c42e8fe5ded9f6386acc07315d39a09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:13.124488  323157 start.go:364] duration metric: took 45.103µs to acquireMachinesLock for "ha-922218-m02"
	I1120 20:52:13.124508  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:13.124518  323157 fix.go:54] fixHost starting: m02
	I1120 20:52:13.124771  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.143934  323157 fix.go:112] recreateIfNeeded on ha-922218-m02: state=Stopped err=<nil>
	W1120 20:52:13.143964  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:13.149354  323157 out.go:252] * Restarting existing docker container for "ha-922218-m02" ...
	I1120 20:52:13.149455  323157 cli_runner.go:164] Run: docker start ha-922218-m02
	I1120 20:52:13.461778  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.484306  323157 kic.go:430] container "ha-922218-m02" state is running.
	I1120 20:52:13.484763  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:13.505868  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.506112  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:13.506167  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:13.526643  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:13.526854  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:13.526866  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:13.527491  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45258->127.0.0.1:32813: read: connection reset by peer
	I1120 20:52:16.660479  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
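	The "connection reset by peer" two lines up is expected right after docker start: the container's SSH port is mapped before sshd inside it is accepting connections, so libmachine retries until the handshake succeeds (about three seconds later here). The retry boils down to a poll loop of roughly this shape, sketched here with the host port from this log:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// waitForSSH polls a TCP address until something accepts the connection.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
	}
	
	func main() {
		fmt.Println(waitForSSH("127.0.0.1:32813", 30*time.Second))
	}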
	I1120 20:52:16.660511  323157 ubuntu.go:182] provisioning hostname "ha-922218-m02"
	I1120 20:52:16.660584  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.679969  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.680183  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.680195  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m02 && echo "ha-922218-m02" | sudo tee /etc/hostname
	I1120 20:52:16.821890  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
	I1120 20:52:16.821965  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.839799  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.840017  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.840033  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:16.971112  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:16.971145  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:16.971166  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:16.971179  323157 provision.go:84] configureAuth start
	I1120 20:52:16.971279  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:16.989488  323157 provision.go:143] copyHostCerts
	I1120 20:52:16.989529  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989560  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:16.989569  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989635  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:16.989719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989738  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:16.989744  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989770  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:16.989870  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989892  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:16.989898  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989924  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:16.989977  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m02 san=[127.0.0.1 192.168.49.3 ha-922218-m02 localhost minikube]
	I1120 20:52:18.325243  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:18.325315  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:18.325359  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.349476  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:18.454303  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:18.454394  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:52:18.479542  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:18.479667  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:52:18.500104  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:18.500180  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:18.518171  323157 provision.go:87] duration metric: took 1.546978244s to configureAuth
	I1120 20:52:18.518200  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:18.518425  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:18.518527  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.537190  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:18.537424  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:18.537440  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:18.895794  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:52:18.895824  323157 machine.go:97] duration metric: took 5.389701302s to provisionDockerMachine
	I1120 20:52:18.895839  323157 start.go:293] postStartSetup for "ha-922218-m02" (driver="docker")
	I1120 20:52:18.895853  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:18.895988  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:18.896049  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.917397  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.017957  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:19.023501  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:19.023526  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:19.023536  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:19.023581  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:19.023657  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:19.023667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:19.023756  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:19.033501  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:19.054171  323157 start.go:296] duration metric: took 158.315421ms for postStartSetup
	I1120 20:52:19.054290  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:19.054332  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.076545  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.179900  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:19.186175  323157 fix.go:56] duration metric: took 6.061648548s for fixHost
	I1120 20:52:19.186235  323157 start.go:83] releasing machines lock for "ha-922218-m02", held for 6.061714164s
	I1120 20:52:19.186321  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:19.212036  323157 out.go:179] * Found network options:
	I1120 20:52:19.213348  323157 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 20:52:19.214893  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:52:19.214943  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:52:19.215032  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:19.215091  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.215108  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:19.215187  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.241015  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.241538  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.437902  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:19.444586  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:19.444668  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:19.455464  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:19.455492  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:19.455532  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:19.455584  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:19.479915  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:19.496789  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:19.496839  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:19.512753  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:19.525991  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:19.636269  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:19.743860  323157 docker.go:234] disabling docker service ...
	I1120 20:52:19.743937  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:19.758942  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:19.771625  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:19.879756  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:19.984607  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:19.997908  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:20.012508  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:20.012564  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.021752  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:20.021808  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.031377  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.041137  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.050156  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:20.058809  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.068260  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.078190  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
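	This run of sed one-liners leaves /etc/crio/crio.conf.d/02-crio.conf with the minikube pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls list containing net.ipv4.ip_unprivileged_port_start=0 so unprivileged pods can bind low ports. The same line-oriented rewrite expressed in Go for one of the keys, as a sketch:
	
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// same intent as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		data = re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(conf, data, 0644); err != nil {
			panic(err)
		}
	}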
	I1120 20:52:20.087650  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:20.095104  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:20.102596  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:20.245093  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:53:50.500050  323157 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.254907746s)
	I1120 20:53:50.500099  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:53:50.500170  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:53:50.504526  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:53:50.504579  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:53:50.508365  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:53:50.534774  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:53:50.534864  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.562018  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.592115  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:53:50.593411  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:53:50.594685  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:53:50.612868  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:53:50.617151  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:53:50.628089  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:53:50.628365  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:50.628586  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:53:50.646653  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:53:50.646897  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.3
	I1120 20:53:50.646915  323157 certs.go:195] generating shared ca certs ...
	I1120 20:53:50.646931  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:53:50.647073  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:53:50.647108  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:53:50.647117  323157 certs.go:257] generating profile certs ...
	I1120 20:53:50.647209  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:53:50.647303  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c836c87f
	I1120 20:53:50.647340  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:53:50.647354  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:53:50.647371  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:53:50.647384  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:53:50.647397  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:53:50.647409  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:53:50.647421  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:53:50.647433  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:53:50.647458  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:53:50.647511  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:53:50.647546  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:53:50.647555  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:53:50.647579  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:53:50.647605  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:53:50.647625  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:53:50.647667  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:53:50.647693  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:53:50.647706  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:50.647719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:53:50.647768  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:53:50.665659  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:53:50.755584  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:53:50.760041  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:53:50.768729  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:53:50.772558  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:53:50.781784  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:53:50.785575  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:53:50.794334  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:53:50.798078  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:53:50.807321  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:53:50.811305  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:53:50.819736  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:53:50.823350  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:53:50.831741  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:53:50.849848  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:53:50.867486  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:53:50.884818  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:53:50.902061  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:53:50.919790  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:53:50.937569  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:53:50.955443  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:53:50.972778  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:53:50.990638  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:53:51.008199  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:53:51.026275  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:53:51.039905  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:53:51.054001  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:53:51.068159  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:53:51.083445  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:53:51.096696  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:53:51.109424  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 20:53:51.122677  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:53:51.129308  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.137038  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:53:51.144950  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148713  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148764  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.183638  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:53:51.192271  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.199701  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:53:51.207336  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211049  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211109  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.247556  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:53:51.255756  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.263373  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:53:51.270762  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274831  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274886  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.310488  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
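Each "ln -fs" plus "openssl x509 -hash" pair above installs a CA into OpenSSL's hashed-lookup layout: TLS clients locate trust anchors by subject-name hash, so the symlink /etc/ssl/certs/b5213941.0 is what actually makes minikubeCA.pem discoverable. A minimal sketch of the same step (linkCA is a hypothetical helper; like the log, it shells out to the openssl CLI):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject-name hash of a PEM certificate and
// links it into /etc/ssl/certs as <hash>.0, mimicking `ln -fs` above.
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}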
	I1120 20:53:51.318664  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:53:51.322469  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:53:51.356447  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:53:51.390490  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:53:51.424733  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:53:51.459076  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:53:51.492960  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
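The six "openssl x509 ... -checkend 86400" runs above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is the trigger for regenerating it during a restart. A native sketch of the same test, assuming the file holds a single PEM block (expiresWithin is hypothetical, not a minikube helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// given window, matching `openssl x509 -checkend <seconds>` semantics.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// -checkend fails (exit 1) when NotAfter falls inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("expires within 24h:", soon)
}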
	I1120 20:53:51.527319  323157 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 20:53:51.527454  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
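The unit text logged above is the per-node kubelet drop-in (the 363-byte 10-kubeadm.conf scp'd a few lines below). The empty ExecStart= line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may redefine it, and only --hostname-override and --node-ip vary between nodes. A hypothetical rendering sketch under that assumption (not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render the drop-in for one node; only the last two fields differ per node.
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "ha-922218-m02",
		"NodeIP":            "192.168.49.3",
	})
}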
	I1120 20:53:51.527485  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:53:51.527542  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:53:51.541450  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1

	stdout:
	
	stderr:
	I1120 20:53:51.541513  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
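This manifest lands in /etc/kubernetes/manifests/kube-vip.yaml (scp'd below), so kubelet runs kube-vip as a static pod on each control plane; leader election on the plndr-cp-lock lease (5s lease, 3s renew deadline, 1s retry) decides which node answers ARP for the 192.168.49.254 VIP. Because the lsmod probe found no ip_vs modules, only cp_enable (ARP failover) is set and IPVS-based load balancing is skipped. An equivalent probe, sketched natively (ipvsLoaded is hypothetical; lsmod itself just reads /proc/modules):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded scans /proc/modules for ip_vs* entries, the same information
// `lsmod | grep ip_vs` reports above.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}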
	I1120 20:53:51.541572  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:53:51.549762  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:53:51.549835  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:53:51.558197  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:53:51.572021  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:53:51.585070  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:53:51.597674  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:53:51.601380  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
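The /etc/hosts rewrite above is idempotent: grep -v strips any stale control-plane.minikube.internal entry, the fresh line is appended, and the result is staged under /tmp before sudo cp swaps it in, so repeated restarts never accumulate duplicate entries.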
	I1120 20:53:51.611235  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.721067  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.734155  323157 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:53:51.734528  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:51.736279  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:53:51.737724  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.846124  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.859674  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:53:51.859761  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:53:51.860000  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m02" to be "Ready" ...
	W1120 20:53:53.863125  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:55.863446  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:57.863942  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	I1120 20:54:00.364328  323157 node_ready.go:49] node "ha-922218-m02" is "Ready"
	I1120 20:54:00.364359  323157 node_ready.go:38] duration metric: took 8.504330619s for node "ha-922218-m02" to be "Ready" ...
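node_ready polls the API every two seconds until the node's Ready condition flips to True, here about 8.5s after kubelet start. A minimal client-go sketch of the same wait, assuming the kubeconfig path used above (nodeReady is a hypothetical helper, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "ha-922218-m02"); ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}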
	I1120 20:54:00.364381  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:00.364433  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:00.376821  323157 api_server.go:72] duration metric: took 8.642616301s to wait for apiserver process to appear ...
	I1120 20:54:00.376853  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:00.376887  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:00.381080  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:54:00.382023  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:00.382047  323157 api_server.go:131] duration metric: took 5.187881ms to wait for apiserver health ...
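Note that the healthz probe targets the node's own endpoint (192.168.49.2:8443) rather than the 192.168.49.254 VIP, matching the stale-ClientConfig override logged at 20:53:51; an HTTP 200 with the literal body "ok" is what counts as healthy.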
	I1120 20:54:00.382059  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:00.388374  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:00.388402  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.388407  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.388410  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.388414  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.388417  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.388422  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.388425  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.388428  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.388435  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.388440  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.388445  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.388448  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.388453  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.388461  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.388465  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.388468  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.388473  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.388479  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.388482  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.388485  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.388491  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.388494  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.388496  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.388499  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.388502  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.388505  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.388510  323157 system_pods.go:74] duration metric: took 6.446272ms to wait for pod list to return data ...
	I1120 20:54:00.388517  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:00.391628  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:00.391650  323157 default_sa.go:55] duration metric: took 3.127505ms for default service account to be created ...
	I1120 20:54:00.391659  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:00.397448  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:00.397474  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.397480  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.397484  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.397487  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.397491  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.397495  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.397498  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.397501  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.397507  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.397515  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.397519  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.397523  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.397528  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.397534  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.397537  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.397542  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.397546  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.397550  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.397553  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.397556  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.397559  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.397564  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.397567  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.397569  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.397574  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.397577  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.397584  323157 system_pods.go:126] duration metric: took 5.920412ms to wait for k8s-apps to be running ...
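In both listings, kindnet-xhlv4 and kube-proxy-hjm8j report Running with ContainersNotReady; their containers most likely sit on the nodes still being restarted, and the k8s-apps check accepts Running phase without requiring readiness gates to have cleared.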
	I1120 20:54:00.397590  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:00.397634  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:00.411201  323157 system_svc.go:56] duration metric: took 13.597746ms WaitForService to wait for kubelet
	I1120 20:54:00.411248  323157 kubeadm.go:587] duration metric: took 8.677048036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:00.411276  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:00.415079  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415110  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415124  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415127  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415131  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415134  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415137  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415140  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415143  323157 node_conditions.go:105] duration metric: took 3.862735ms to run NodePressure ...
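The four identical NodePressure pairs (304681132Ki ephemeral storage, 8 CPUs) correspond to one reading per node; with the docker (kic) driver every node is a container on the same host, so each reports the host machine's capacity.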
	I1120 20:54:00.415156  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:00.415179  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:00.416940  323157 out.go:203] 
	I1120 20:54:00.418262  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:00.418361  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.420050  323157 out.go:179] * Starting "ha-922218-m03" control-plane node in "ha-922218" cluster
	I1120 20:54:00.421459  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:00.422633  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:00.423753  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:00.423776  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:00.423854  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:00.423922  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:00.423940  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:00.424083  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.445274  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:00.445296  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:00.445313  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:00.445346  323157 start.go:360] acquireMachinesLock for ha-922218-m03: {Name:mk2f097c0ed961dc411b64ff8718e82c63bed499 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:00.445404  323157 start.go:364] duration metric: took 37.644µs to acquireMachinesLock for "ha-922218-m03"
	I1120 20:54:00.445429  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:00.445440  323157 fix.go:54] fixHost starting: m03
	I1120 20:54:00.445721  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.464059  323157 fix.go:112] recreateIfNeeded on ha-922218-m03: state=Stopped err=<nil>
	W1120 20:54:00.464096  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:00.465782  323157 out.go:252] * Restarting existing docker container for "ha-922218-m03" ...
	I1120 20:54:00.465877  323157 cli_runner.go:164] Run: docker start ha-922218-m03
	I1120 20:54:00.752312  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.772989  323157 kic.go:430] container "ha-922218-m03" state is running.
	I1120 20:54:00.773519  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:00.792599  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.792864  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:00.792955  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:00.811862  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:00.812107  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:00.812122  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:00.812859  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51650->127.0.0.1:32818: read: connection reset by peer
	I1120 20:54:03.944569  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:03.944604  323157 ubuntu.go:182] provisioning hostname "ha-922218-m03"
	I1120 20:54:03.944668  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:03.962694  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:03.962979  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:03.963001  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m03 && echo "ha-922218-m03" | sudo tee /etc/hostname
	I1120 20:54:04.105497  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:04.105607  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.123058  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.123306  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.123324  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:04.258245  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
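The hostname script above is guarded the same way as the hosts rewrite earlier: it only edits /etc/hosts when no entry for ha-922218-m03 exists, rewriting the 127.0.1.1 line in place if present and appending otherwise. The earlier "connection reset by peer" dial error is benign; sshd in the just-restarted container was not yet accepting connections, and the client retried until it answered at 20:54:03.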
	I1120 20:54:04.258278  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:04.258296  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:04.258308  323157 provision.go:84] configureAuth start
	I1120 20:54:04.258362  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:04.279610  323157 provision.go:143] copyHostCerts
	I1120 20:54:04.279658  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279700  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:04.279713  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279830  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:04.279954  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.279983  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:04.279994  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.280037  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:04.280114  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280137  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:04.280143  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280182  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:04.280275  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m03 san=[127.0.0.1 192.168.49.4 ha-922218-m03 localhost minikube]
	I1120 20:54:04.594873  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:04.594949  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:04.595006  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.620652  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:04.724930  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:04.724996  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:04.744735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:04.744808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:04.767156  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:04.767237  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:04.786208  323157 provision.go:87] duration metric: took 527.885771ms to configureAuth
	I1120 20:54:04.786260  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:04.786486  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:04.786596  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.804998  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.805211  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.805245  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:05.142154  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:54:05.142184  323157 machine.go:97] duration metric: took 4.349303942s to provisionDockerMachine
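The sysconfig drop-in written above hands --insecure-registry 10.96.0.0/12 (the cluster's service CIDR) to CRI-O so that in-cluster registry services can be pulled from over plain HTTP; the systemctl restart of crio bundled into the same SSH command accounts for most of the 4.3s this provisioning step took.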
	I1120 20:54:05.142196  323157 start.go:293] postStartSetup for "ha-922218-m03" (driver="docker")
	I1120 20:54:05.142207  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:05.142302  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:05.142352  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.161336  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.258512  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:05.262505  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:05.262541  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:05.262557  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:05.262619  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:05.262714  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:05.262726  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:05.262809  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:05.270992  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:05.290254  323157 start.go:296] duration metric: took 148.013138ms for postStartSetup
	I1120 20:54:05.290349  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:05.290395  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.312238  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.418404  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:05.424662  323157 fix.go:56] duration metric: took 4.979214262s for fixHost
	I1120 20:54:05.424693  323157 start.go:83] releasing machines lock for "ha-922218-m03", held for 4.979275228s
	I1120 20:54:05.424774  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:05.448969  323157 out.go:179] * Found network options:
	I1120 20:54:05.450451  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 20:54:05.453201  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453264  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453295  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453313  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:05.453406  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:05.453469  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.453486  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:05.453555  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.475420  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.475725  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.630989  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:05.636113  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:05.636175  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:05.644977  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:54:05.645012  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:05.645047  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:05.645097  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:05.661262  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:05.674425  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:05.674494  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:05.689725  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:05.702759  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:05.825858  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:05.942569  323157 docker.go:234] disabling docker service ...
	I1120 20:54:05.942658  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:05.958482  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:05.972123  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:06.094822  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:06.215707  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:54:06.229448  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:06.245084  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:06.245154  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.254965  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:06.255020  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.265259  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.275476  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.285519  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:06.294777  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.304916  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.313870  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.322957  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:06.330497  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:06.338069  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:06.450575  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
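The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "systemd" to match the driver detected on the host, conmon_cgroup becomes "pod" (required when the systemd manager is in use), and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls so containers can bind low ports. The daemon-reload and crio restart that follow apply the new config.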
	I1120 20:54:06.648124  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:06.648243  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:06.653061  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:06.653129  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:06.657494  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:06.699746  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:54:06.699846  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.736255  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.768946  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:06.770257  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:06.771411  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:06.772594  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:06.792494  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:06.797451  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:06.810322  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:06.810733  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:06.811056  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:06.832939  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:06.833235  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.4
	I1120 20:54:06.833251  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:06.833270  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:06.833418  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:06.833458  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:06.833467  323157 certs.go:257] generating profile certs ...
	I1120 20:54:06.833538  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:54:06.833595  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.8321a6cf
	I1120 20:54:06.833629  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:54:06.833641  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:06.833655  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:06.833667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:06.833679  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:06.833691  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:54:06.833704  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:54:06.833716  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:54:06.833730  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:54:06.833780  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:06.833808  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:06.833818  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:06.833838  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:06.833859  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:06.833880  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:06.833917  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:06.833947  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:06.833959  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:06.833973  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:06.834021  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:54:06.855612  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:54:06.947569  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:54:06.951943  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:54:06.960328  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:54:06.963907  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:54:06.972305  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:54:06.975879  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:54:06.984275  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:54:06.987841  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:54:06.995987  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:54:06.999744  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:54:07.008281  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:54:07.011963  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:54:07.020131  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:07.038787  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:07.058870  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:07.076347  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:07.093829  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:54:07.111361  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:54:07.133151  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:54:07.155916  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:54:07.176755  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:07.200109  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:07.222203  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:07.243966  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:54:07.260671  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:54:07.277366  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:54:07.293185  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:54:07.309452  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:54:07.324432  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:54:07.339188  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 20:54:07.353766  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:07.359885  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.367247  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:07.374693  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378281  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378337  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.415439  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:54:07.423662  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.431392  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:07.439351  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442939  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442985  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.477391  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:07.485472  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.493309  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:07.500900  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504615  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504678  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.540459  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:54:07.548510  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:07.552608  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:54:07.587157  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:54:07.623309  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:54:07.659308  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:54:07.694048  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:54:07.730482  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 20:54:07.766483  323157 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 20:54:07.766598  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
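In the kubelet unit above, the empty `ExecStart=` line is the usual systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so that the following `ExecStart=` line becomes the effective, node-specific command (note the `--hostname-override=ha-922218-m03` and `--node-ip=192.168.49.4`). The merged result can be inspected with standard systemd tooling:

    # Show the base unit together with its drop-ins, i.e. the effective unit
    # after /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is applied.
    systemctl cat kubelet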
	I1120 20:54:07.766625  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:54:07.766666  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:54:07.780008  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:54:07.780076  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
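kube-vip runs as a static pod on each control-plane node; because `lsmod | grep ip_vs` exited non-zero above, minikube fell back to ARP-based failover for the VIP (vip_arp=true, with the leader elected via the plndr-cp-lock lease) instead of IPVS control-plane load-balancing. On a host where IPVS is wanted, the check-and-load step would look roughly like this (the module set is the usual IPVS one, assumed rather than taken from this log):

    # Load the IPVS kernel modules if they are not already present.
    if ! lsmod | grep -q '^ip_vs'; then
        sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    fi
    lsmod | grep ip_vs   # should now list the ip_vs modules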
	I1120 20:54:07.780149  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:07.788134  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:07.788227  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:54:07.796010  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:07.808930  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:07.821862  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:54:07.834855  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:07.838597  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:07.850360  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:07.963081  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:07.976660  323157 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:54:07.976968  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:07.979321  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:07.980344  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:08.088528  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:08.102382  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:08.102458  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:08.102723  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105908  323157 node_ready.go:49] node "ha-922218-m03" is "Ready"
	I1120 20:54:08.105930  323157 node_ready.go:38] duration metric: took 3.189835ms for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105943  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:08.105984  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:08.117937  323157 api_server.go:72] duration metric: took 141.218493ms to wait for apiserver process to appear ...
	I1120 20:54:08.117959  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:08.117974  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:08.122063  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:54:08.123003  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:08.123025  323157 api_server.go:131] duration metric: took 5.061002ms to wait for apiserver health ...
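The health wait above probes the node's own apiserver endpoint (192.168.49.2:8443, not the VIP) and expects HTTP 200 with the literal body "ok". An equivalent one-off check from the host, using the client credentials this log already references:

    # Healthy apiservers answer /healthz with HTTP 200 and the body "ok".
    curl --cacert /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt \
      --key /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key \
      https://192.168.49.2:8443/healthz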
	I1120 20:54:08.123033  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:08.128879  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:08.128913  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.128922  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.128934  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.128940  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.128953  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.128958  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.128965  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.128968  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.128973  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.128980  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.128984  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.128988  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.128993  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.128997  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.129005  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.129009  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.129016  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.129020  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.129026  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.129029  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.129032  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.129036  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.129042  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.129045  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.129047  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.129050  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.129056  323157 system_pods.go:74] duration metric: took 6.018012ms to wait for pod list to return data ...
	I1120 20:54:08.129064  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:08.131679  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:08.131697  323157 default_sa.go:55] duration metric: took 2.627778ms for default service account to be created ...
	I1120 20:54:08.131713  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:08.136580  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:08.136605  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.136610  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.136614  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.136617  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.136625  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.136629  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.136637  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.136642  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.136647  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.136652  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.136656  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.136661  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.136666  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.136670  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.136676  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.136680  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.136685  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.136689  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.136693  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.136696  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.136710  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.136718  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.136721  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.136724  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.136727  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.136730  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.136739  323157 system_pods.go:126] duration metric: took 5.020694ms to wait for k8s-apps to be running ...
	I1120 20:54:08.136745  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:08.136787  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:08.150040  323157 system_svc.go:56] duration metric: took 13.283775ms WaitForService to wait for kubelet
	I1120 20:54:08.150069  323157 kubeadm.go:587] duration metric: took 173.353654ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:08.150089  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:08.153814  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153839  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153854  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153860  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153866  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153871  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153876  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153888  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153894  323157 node_conditions.go:105] duration metric: took 3.799942ms to run NodePressure ...
	I1120 20:54:08.153910  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:08.153941  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:08.155986  323157 out.go:203] 
	I1120 20:54:08.157318  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:08.157412  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.158743  323157 out.go:179] * Starting "ha-922218-m04" worker node in "ha-922218" cluster
	I1120 20:54:08.159836  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:08.160869  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:08.161862  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:08.161877  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:08.161937  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:08.161978  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:08.161992  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:08.162094  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.182859  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:08.182880  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:08.182897  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:08.182927  323157 start.go:360] acquireMachinesLock for ha-922218-m04: {Name:mk1c4e4c260415277383e4e2d7891bdf9d980713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:08.182984  323157 start.go:364] duration metric: took 40.112µs to acquireMachinesLock for "ha-922218-m04"
	I1120 20:54:08.183005  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:08.183013  323157 fix.go:54] fixHost starting: m04
	I1120 20:54:08.183210  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.201956  323157 fix.go:112] recreateIfNeeded on ha-922218-m04: state=Stopped err=<nil>
	W1120 20:54:08.201985  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:08.203921  323157 out.go:252] * Restarting existing docker container for "ha-922218-m04" ...
	I1120 20:54:08.203990  323157 cli_runner.go:164] Run: docker start ha-922218-m04
	I1120 20:54:08.500882  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.520205  323157 kic.go:430] container "ha-922218-m04" state is running.
	I1120 20:54:08.520698  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:08.539598  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.539924  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:08.540000  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:08.558817  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:08.559028  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:08.559039  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:08.559647  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56832->127.0.0.1:32823: read: connection reset by peer
	I1120 20:54:11.694470  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
	
	I1120 20:54:11.694498  323157 ubuntu.go:182] provisioning hostname "ha-922218-m04"
	I1120 20:54:11.694556  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.713721  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.714041  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.714063  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m04 && echo "ha-922218-m04" | sudo tee /etc/hostname
	I1120 20:54:11.857712  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
	
	I1120 20:54:11.857805  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.876191  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.876435  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.876453  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:12.008064  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:54:12.008105  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:12.008131  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:12.008149  323157 provision.go:84] configureAuth start
	I1120 20:54:12.008245  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.026345  323157 provision.go:143] copyHostCerts
	I1120 20:54:12.026390  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026424  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:12.026431  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026501  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:12.026600  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026623  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:12.026630  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026671  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:12.026742  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026767  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:12.026776  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026803  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:12.026878  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m04 san=[127.0.0.1 192.168.49.5 ha-922218-m04 localhost minikube]
	I1120 20:54:12.101540  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:12.101615  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:12.101661  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.120979  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.218812  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:12.218866  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:12.237906  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:12.237973  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:12.256242  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:12.256298  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:12.274435  323157 provision.go:87] duration metric: took 266.26509ms to configureAuth
	I1120 20:54:12.274472  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:12.274774  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:12.274937  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.294444  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:12.294713  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:12.294742  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:12.585665  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:54:12.585696  323157 machine.go:97] duration metric: took 4.045752536s to provisionDockerMachine
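The sysconfig fragment written above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so images can be pulled from in-cluster registries reachable via ClusterIPs without TLS. The variable is presumably consumed by the crio unit through an EnvironmentFile (the wiring below is assumed from the file name, not read from this log):

    # Inspect how crio.service picks up the drop-in that the restart applied.
    systemctl cat crio | grep -E 'EnvironmentFile|ExecStart'
    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '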
	I1120 20:54:12.585712  323157 start.go:293] postStartSetup for "ha-922218-m04" (driver="docker")
	I1120 20:54:12.585734  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:12.585814  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:12.585872  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.604768  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.701189  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:12.705103  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:12.705131  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:12.705142  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:12.705203  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:12.705316  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:12.705328  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:12.705436  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:12.713808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:12.733683  323157 start.go:296] duration metric: took 147.949948ms for postStartSetup
	I1120 20:54:12.733781  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:12.733836  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.752642  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.846722  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:12.851576  323157 fix.go:56] duration metric: took 4.668555957s for fixHost
	I1120 20:54:12.851609  323157 start.go:83] releasing machines lock for "ha-922218-m04", held for 4.668610463s
	I1120 20:54:12.851688  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.872067  323157 out.go:179] * Found network options:
	I1120 20:54:12.873523  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 20:54:12.874579  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874614  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874623  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874645  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874656  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874666  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:12.874743  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:12.874790  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.874801  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:12.874864  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:13.046495  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:13.051522  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:13.051600  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:13.060371  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:54:13.060402  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:13.060441  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:13.060496  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:13.075603  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:13.089123  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:13.089184  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:13.104495  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:13.117935  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:13.204636  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:13.289453  323157 docker.go:234] disabling docker service ...
	I1120 20:54:13.289527  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:13.304738  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:13.317782  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:13.405405  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:13.491709  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:54:13.504420  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:13.519371  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:13.519439  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.528469  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:13.528520  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.537935  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.546887  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.555908  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:13.564139  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.573055  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.581595  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.590695  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:13.597950  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:13.605162  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:13.690911  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
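The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd (matching the "systemd" driver detected on the host), puts conmon in the pod cgroup, and re-adds the unprivileged-port sysctl before restarting crio. The expected key lines after the edits (reconstructed from the sed expressions, not dumped in this log):

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]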
	I1120 20:54:13.836871  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:13.836951  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:13.841421  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:13.841486  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:13.846169  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:13.871670  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:54:13.871776  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.899765  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.930597  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:13.931748  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:13.932757  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:13.933675  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 20:54:13.934705  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:13.952693  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:13.957363  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:13.968716  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:13.969001  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:13.969254  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:13.988111  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:13.988373  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.5
	I1120 20:54:13.988385  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:13.988399  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:13.988540  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:13.988575  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:13.988589  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:13.988603  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:13.988615  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:13.988628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:13.988691  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:13.988719  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:13.988729  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:13.988750  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:13.988771  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:13.988792  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:13.988827  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:13.988853  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:13.988866  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:13.988881  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:13.988902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:14.007643  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:14.026465  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:14.045259  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:14.064924  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:14.083817  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:14.101377  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:14.119564  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:14.126329  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.134374  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:14.142273  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146139  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146194  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.182277  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:14.190606  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.198830  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:14.206817  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210855  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210906  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.245946  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:54:14.254083  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.261737  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:14.269638  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273524  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273580  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.308064  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:54:14.316236  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:14.320194  323157 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:54:14.320268  323157 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 20:54:14.320379  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:54:14.320454  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:14.328815  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:14.328872  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 20:54:14.336516  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:14.349467  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:14.362001  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:14.365657  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:14.375549  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.458116  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.472066  323157 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 20:54:14.472382  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:14.474034  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:14.474976  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.559289  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.572777  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:14.572849  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:14.573080  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m04" to be "Ready" ...
	W1120 20:54:16.576678  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:19.076525  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:21.078346  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	I1120 20:54:22.076345  323157 node_ready.go:49] node "ha-922218-m04" is "Ready"
	I1120 20:54:22.076377  323157 node_ready.go:38] duration metric: took 7.503280123s for node "ha-922218-m04" to be "Ready" ...
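The poll above watches the Node object's Ready condition flip from Unknown (set by the node lifecycle controller while the restarted kubelet was unreachable) back to True once the kubelet reports in. An equivalent one-off check with plain kubectl:

    # Prints "True" once the rejoined worker's kubelet has reported status.
    kubectl get node ha-922218-m04 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'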
	I1120 20:54:22.076397  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:22.076458  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:22.089909  323157 system_svc.go:56] duration metric: took 13.491851ms WaitForService to wait for kubelet
	I1120 20:54:22.089941  323157 kubeadm.go:587] duration metric: took 7.617823089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:22.089966  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:22.093121  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093142  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093154  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093158  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093161  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093165  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093170  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093175  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093180  323157 node_conditions.go:105] duration metric: took 3.207725ms to run NodePressure ...
	I1120 20:54:22.093197  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:22.093255  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:22.093568  323157 ssh_runner.go:195] Run: rm -f paused
	I1120 20:54:22.097398  323157 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:54:22.097827  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 20:54:22.109570  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119449  323157 pod_ready.go:94] pod "coredns-66bc5c9577-2msz7" is "Ready"
	I1120 20:54:22.119483  323157 pod_ready.go:86] duration metric: took 9.881192ms for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119494  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.131629  323157 pod_ready.go:94] pod "coredns-66bc5c9577-kd4l6" is "Ready"
	I1120 20:54:22.131656  323157 pod_ready.go:86] duration metric: took 12.154214ms for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.134158  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138697  323157 pod_ready.go:94] pod "etcd-ha-922218" is "Ready"
	I1120 20:54:22.138722  323157 pod_ready.go:86] duration metric: took 4.537439ms for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138729  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142874  323157 pod_ready.go:94] pod "etcd-ha-922218-m02" is "Ready"
	I1120 20:54:22.142900  323157 pod_ready.go:86] duration metric: took 4.166255ms for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142909  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.298304  323157 request.go:683] "Waited before sending request" delay="155.234553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-922218-m03"
	I1120 20:54:22.498845  323157 request.go:683] "Waited before sending request" delay="197.338738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:22.501969  323157 pod_ready.go:94] pod "etcd-ha-922218-m03" is "Ready"
	I1120 20:54:22.502000  323157 pod_ready.go:86] duration metric: took 359.082878ms for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.698517  323157 request.go:683] "Waited before sending request" delay="196.343264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1120 20:54:22.702321  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.898835  323157 request.go:683] "Waited before sending request" delay="196.37899ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218"
	I1120 20:54:23.098414  323157 request.go:683] "Waited before sending request" delay="196.292789ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:23.101586  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218" is "Ready"
	I1120 20:54:23.101613  323157 pod_ready.go:86] duration metric: took 399.267945ms for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.101634  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.299099  323157 request.go:683] "Waited before sending request" delay="197.354769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m02"
	I1120 20:54:23.498968  323157 request.go:683] "Waited before sending request" delay="196.361911ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:23.502012  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m02" is "Ready"
	I1120 20:54:23.502037  323157 pod_ready.go:86] duration metric: took 400.398297ms for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.502045  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.698407  323157 request.go:683] "Waited before sending request" delay="196.284088ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m03"
	I1120 20:54:23.899090  323157 request.go:683] "Waited before sending request" delay="197.347334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:23.902334  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m03" is "Ready"
	I1120 20:54:23.902359  323157 pod_ready.go:86] duration metric: took 400.308088ms for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.098830  323157 request.go:683] "Waited before sending request" delay="196.34133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 20:54:24.102694  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.299178  323157 request.go:683] "Waited before sending request" delay="196.360417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218"
	I1120 20:54:24.499104  323157 request.go:683] "Waited before sending request" delay="196.347724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:24.502309  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218" is "Ready"
	I1120 20:54:24.502336  323157 pod_ready.go:86] duration metric: took 399.617093ms for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.502348  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.698782  323157 request.go:683] "Waited before sending request" delay="196.335349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m02"
	I1120 20:54:24.898597  323157 request.go:683] "Waited before sending request" delay="196.345917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:24.901960  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m02" is "Ready"
	I1120 20:54:24.901992  323157 pod_ready.go:86] duration metric: took 399.637685ms for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.902001  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.098365  323157 request.go:683] "Waited before sending request" delay="196.278218ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m03"
	I1120 20:54:25.299280  323157 request.go:683] "Waited before sending request" delay="197.379888ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.302430  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m03" is "Ready"
	I1120 20:54:25.302455  323157 pod_ready.go:86] duration metric: took 400.448425ms for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.498873  323157 request.go:683] "Waited before sending request" delay="196.293203ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 20:54:25.502860  323157 pod_ready.go:83] waiting for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.698288  323157 request.go:683] "Waited before sending request" delay="195.281134ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cpch"
	I1120 20:54:25.898934  323157 request.go:683] "Waited before sending request" delay="197.356231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.902128  323157 pod_ready.go:94] pod "kube-proxy-4cpch" is "Ready"
	I1120 20:54:25.902154  323157 pod_ready.go:86] duration metric: took 399.270347ms for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.902162  323157 pod_ready.go:83] waiting for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:26.098606  323157 request.go:683] "Waited before sending request" delay="196.346655ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.299163  323157 request.go:683] "Waited before sending request" delay="197.345494ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:26.498649  323157 request.go:683] "Waited before sending request" delay="96.287539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.699151  323157 request.go:683] "Waited before sending request" delay="197.392783ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.098399  323157 request.go:683] "Waited before sending request" delay="192.27694ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.498455  323157 request.go:683] "Waited before sending request" delay="92.237627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	W1120 20:54:27.908326  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:29.909034  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:32.408730  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:34.908689  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:37.408823  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:39.908861  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:42.408698  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:44.908702  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:47.408397  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:49.409469  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:51.908163  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:53.908996  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:56.408061  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:58.408624  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:00.908495  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:02.908955  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:05.408405  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:07.909016  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:10.408037  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:12.408417  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:14.908340  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:17.409065  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:19.908332  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:21.908889  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:24.408759  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:26.908929  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:29.408434  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:31.408588  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:33.409210  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:35.908636  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:37.909250  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:40.410051  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:42.909105  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:45.408430  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:47.408740  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:49.908450  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:52.409005  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:54.907859  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:56.908189  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:58.908541  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:00.909373  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:03.408429  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:05.408564  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:07.908140  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:09.908306  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:11.908938  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:14.408871  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:16.907877  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:18.907974  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:20.908614  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:23.408900  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:25.908472  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:28.408373  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:30.408570  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:32.408832  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:34.909276  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:37.408137  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:39.409076  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:41.409464  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:43.908812  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:46.408702  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:48.908615  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:51.408026  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:53.408283  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:55.408942  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:57.909263  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:00.408692  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:02.409101  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:04.907598  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:06.908152  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:08.909063  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:11.408240  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:13.908776  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:16.408622  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:18.908974  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:21.409451  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:23.908489  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:25.908547  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:27.909262  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:30.408274  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:32.409046  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:34.908267  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:37.408193  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:39.408371  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:41.908734  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:43.909408  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:46.408806  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:48.908938  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:51.408993  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:53.908365  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:55.908521  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:57.918887  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:00.408852  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:02.410752  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:04.909111  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:07.409095  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:09.908605  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:12.409207  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:14.409540  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:16.908532  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:18.909206  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:21.408380  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	I1120 20:58:22.098421  323157 pod_ready.go:86] duration metric: took 3m56.196242024s for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 20:58:22.098463  323157 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 20:58:22.098478  323157 pod_ready.go:40] duration metric: took 4m0.001055692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:58:22.100129  323157 out.go:203] 
	W1120 20:58:22.101328  323157 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 20:58:22.102425  323157 out.go:203] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-922218 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node list --alsologtostderr -v 5
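The failure above is a readiness-poll timeout, not a crash: pod_ready.go re-checks kube-proxy-hjm8j roughly every 2.5 seconds until the 4m0s WaitExtra deadline expires, and start then exits with GUEST_START (exit status 80, minikube's guest-error exit class). The interleaved "Waited before sending request ... client-side throttling" lines come from client-go's client-side rate limiter; the rest.Config dump above shows QPS:0, Burst:0, so the client falls back to client-go's defaults (5 QPS, burst 10). Below is a minimal sketch of the condition being polled, assuming a standard client-go clientset; waitPodReady is an illustrative name, not minikube's actual API:

	package poll

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod reports condition Ready=True or the
	// deadline passes -- the same condition the pod_ready.go lines above log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the log above
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}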
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-922218
helpers_test.go:243: (dbg) docker inspect ha-922218:

-- stdout --
	[
	    {
	        "Id": "f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2",
	        "Created": "2025-11-20T20:48:08.305484419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323354,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:52:05.575784584Z",
	            "FinishedAt": "2025-11-20T20:52:04.865809974Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/hosts",
	        "LogPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2-json.log",
	        "Name": "/ha-922218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-922218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-922218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2",
	                "LowerDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-922218",
	                "Source": "/var/lib/docker/volumes/ha-922218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-922218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-922218",
	                "name.minikube.sigs.k8s.io": "ha-922218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "66dd206f11a0a0a6ef63e1aaf681e70420082aaa4fdf320b0caa28316d460919",
	            "SandboxKey": "/var/run/docker/netns/66dd206f11a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-922218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "acedad58d8d6133060c432e76b858ca8895634a834fb6c75b12b58c6c2b70de4",
	                    "EndpointID": "5cd5b0e30b914b09cb85aa3289ff87d176f3621330c5c3cc1edd6559a4bda334",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:7d:28:02:80:9c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-922218",
	                        "f4fe7dc2831e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
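The helpers dump the entire inspect document, but any single field above can be pulled with the same Go-template mechanism minikube itself uses later in this log, for example:

	docker container inspect ha-922218 --format={{.State.Status}}
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-922218

The first command appears verbatim in the Last Start log below; the second is a trimmed form of the IP query minikube runs at 20:52:05 and prints 192.168.49.2 for this container.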
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-922218 -n ha-922218
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 logs -n 25: (1.102102941s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218-m03_ha-922218-m02.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m02 sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m02.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218-m04:/home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp testdata/cp-test.txt ha-922218-m04:/home/docker/cp-test.txt                                                             │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218-m04.txt │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218:/home/docker/cp-test_ha-922218-m04_ha-922218.txt                       │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218.txt                                                 │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m02 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m03:/home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ node    │ ha-922218 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ node    │ ha-922218 node start m02 --alsologtostderr -v 5                                                                                      │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ node    │ ha-922218 node list --alsologtostderr -v 5                                                                                           │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ ha-922218 stop --alsologtostderr -v 5                                                                                                │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ ha-922218 start --wait true --alsologtostderr -v 5                                                                                   │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │                     │
	│ node    │ ha-922218 node list --alsologtostderr -v 5                                                                                           │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
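The Last Start section below is the raw klog file from the failed restart; its header documents the line format. Decoding the first entry under that format:

	I1120 20:52:05.328764  323157 out.go:360] Setting OutFile to fd 1 ...
	severity=Info (I/W/E/F)  date=Nov 20  time=20:52:05.328764  threadid=323157  source=out.go:360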
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:52:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:52:05.328764  323157 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:52:05.329077  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329088  323157 out.go:374] Setting ErrFile to fd 2...
	I1120 20:52:05.329095  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329358  323157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:52:05.329815  323157 out.go:368] Setting JSON to false
	I1120 20:52:05.330759  323157 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12867,"bootTime":1763659058,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:52:05.330873  323157 start.go:143] virtualization: kvm guest
	I1120 20:52:05.332897  323157 out.go:179] * [ha-922218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:52:05.334089  323157 notify.go:221] Checking for updates...
	I1120 20:52:05.334111  323157 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:52:05.335153  323157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:52:05.336342  323157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:05.337453  323157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:52:05.338644  323157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:52:05.339840  323157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:52:05.341429  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:05.341547  323157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:52:05.366166  323157 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:52:05.366337  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.429868  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.418170855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.429981  323157 docker.go:319] overlay module found
	I1120 20:52:05.432415  323157 out.go:179] * Using the docker driver based on existing profile
	I1120 20:52:05.433478  323157 start.go:309] selected driver: docker
	I1120 20:52:05.433497  323157 start.go:930] validating driver "docker" against &{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.433601  323157 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:52:05.433679  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.497705  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.48528978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.498702  323157 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:05.498750  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:05.498813  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:05.498895  323157 start.go:353] cluster config:
	{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.501099  323157 out.go:179] * Starting "ha-922218" primary control-plane node in "ha-922218" cluster
	I1120 20:52:05.502199  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:05.503398  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:05.504658  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:05.504699  323157 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:52:05.504719  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:05.504760  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:05.504824  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:05.504840  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:05.505023  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.527904  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:05.527929  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:05.527945  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:05.527985  323157 start.go:360] acquireMachinesLock for ha-922218: {Name:mk7973b5b3e2bce97a45ae60ce14811fb93a6808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:05.528045  323157 start.go:364] duration metric: took 37.272µs to acquireMachinesLock for "ha-922218"
	I1120 20:52:05.528067  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:05.528078  323157 fix.go:54] fixHost starting: 
	I1120 20:52:05.528385  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.546149  323157 fix.go:112] recreateIfNeeded on ha-922218: state=Stopped err=<nil>
	W1120 20:52:05.546186  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:05.548148  323157 out.go:252] * Restarting existing docker container for "ha-922218" ...
	I1120 20:52:05.548228  323157 cli_runner.go:164] Run: docker start ha-922218
	I1120 20:52:05.829297  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.854267  323157 kic.go:430] container "ha-922218" state is running.
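
The restart path above polls the container state through the Docker CLI before provisioning resumes. A minimal Go sketch of the same probe (the container name is taken from the log; this is illustrative, not minikube's cli_runner code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the same docker invocation seen in the
// log and returns the raw status string ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}}", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	fmt.Println(containerState("ha-922218"))
}
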
	I1120 20:52:05.854754  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:05.879797  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.880184  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:05.880316  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:05.902671  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:05.902972  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:05.902987  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:05.903785  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36002->127.0.0.1:32808: read: connection reset by peer
	I1120 20:52:09.038413  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
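
The handshake error above is expected right after "docker start": the port is forwarded before sshd inside the container is ready, so minikube retries until the dial succeeds (as it does about three seconds later). A simplified sketch of such a wait loop, probing only the TCP layer (hypothetical helper, not minikube's actual retry code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort dials the forwarded SSH port until a TCP connection
// succeeds or the deadline passes.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("port %s not reachable: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPort("127.0.0.1:32808", 30*time.Second))
}
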
	I1120 20:52:09.038466  323157 ubuntu.go:182] provisioning hostname "ha-922218"
	I1120 20:52:09.038538  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.056776  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.057040  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.057057  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218 && echo "ha-922218" | sudo tee /etc/hostname
	I1120 20:52:09.198987  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
	I1120 20:52:09.199094  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.218187  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.218484  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.218518  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:09.350283  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:09.350320  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:09.350371  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:09.350386  323157 provision.go:84] configureAuth start
	I1120 20:52:09.350452  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:09.368706  323157 provision.go:143] copyHostCerts
	I1120 20:52:09.368743  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368777  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:09.368790  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368861  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:09.368944  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368963  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:09.368970  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368996  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:09.369044  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369060  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:09.369066  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369089  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:09.369139  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218 san=[127.0.0.1 192.168.49.2 ha-922218 localhost minikube]
	I1120 20:52:10.061446  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:10.061522  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:10.061563  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.080281  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.175628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:10.175687  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1120 20:52:10.193744  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:10.193807  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:52:10.211340  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:10.211404  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:10.229048  323157 provision.go:87] duration metric: took 878.645023ms to configureAuth
	I1120 20:52:10.229077  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:10.229298  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:10.229423  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.247922  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.248191  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:10.248210  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:10.573365  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:52:10.573392  323157 machine.go:97] duration metric: took 4.693182802s to provisionDockerMachine
	I1120 20:52:10.573407  323157 start.go:293] postStartSetup for "ha-922218" (driver="docker")
	I1120 20:52:10.573426  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:10.573499  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:10.573553  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.593733  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.690092  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:10.693995  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:10.694023  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:10.694034  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:10.694094  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:10.694185  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:10.694199  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:10.694322  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:10.702399  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:10.721119  323157 start.go:296] duration metric: took 147.693408ms for postStartSetup
	I1120 20:52:10.721235  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:10.721282  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.739969  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.833630  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:10.838327  323157 fix.go:56] duration metric: took 5.310241763s for fixHost
	I1120 20:52:10.838357  323157 start.go:83] releasing machines lock for "ha-922218", held for 5.310298505s
	I1120 20:52:10.838432  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:10.856719  323157 ssh_runner.go:195] Run: cat /version.json
	I1120 20:52:10.856760  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:10.856779  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.856845  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.876456  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.876715  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:11.025514  323157 ssh_runner.go:195] Run: systemctl --version
	I1120 20:52:11.032462  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:11.068010  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:11.072912  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:11.072991  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:11.081063  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:11.081087  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:11.081118  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:11.081168  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:11.095970  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:11.108445  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:11.108509  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:11.123137  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:11.135601  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:11.213922  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:11.297509  323157 docker.go:234] disabling docker service ...
	I1120 20:52:11.297579  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:11.312344  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:11.324558  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:11.404570  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:11.482324  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:11.495121  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:11.509896  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:11.509955  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.519009  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:11.519074  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.528081  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.536889  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.546294  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:11.554800  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.563861  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.572378  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.581389  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:11.589599  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:11.597300  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:11.674297  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
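
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly equivalent to the following before crio is restarted (reconstructed from the commands; the section headers are assumed, since the drop-in itself is never printed in the log):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
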
	I1120 20:52:11.817850  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:52:11.817928  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:52:11.822052  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:52:11.822102  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:52:11.826068  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:52:11.851404  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:52:11.851494  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.879770  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.909889  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:52:11.911081  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:11.928829  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:52:11.933285  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:52:11.944894  323157 kubeadm.go:884] updating cluster {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:52:11.945069  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:11.945159  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:11.979530  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:11.979551  323157 crio.go:433] Images already preloaded, skipping extraction
	I1120 20:52:11.979599  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:12.008103  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:12.008127  323157 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:52:12.008135  323157 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 20:52:12.008259  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:52:12.008342  323157 ssh_runner.go:195] Run: crio config
	I1120 20:52:12.053953  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:12.053974  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:12.053990  323157 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:52:12.054013  323157 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-922218 NodeName:ha-922218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:52:12.054128  323157 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-922218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
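
minikube renders the kubeadm YAML above from Go templates filled in with the kubeadm options struct logged earlier. A minimal, illustrative text/template sketch (not minikube's actual template; the struct and field names are invented for the example):

package main

import (
	"os"
	"text/template"
)

type endpoint struct {
	Name, IP string
	Port     int
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values mirror the primary control-plane node from the log.
	_ = t.Execute(os.Stdout, endpoint{Name: "ha-922218", IP: "192.168.49.2", Port: 8443})
}
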
	I1120 20:52:12.054146  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:52:12.054186  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:52:12.067315  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
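
Because the ip_vs kernel modules are not loaded, the kube-vip config below omits IPVS-based control-plane load balancing; the VIP (192.168.49.254) still fails over between control-plane nodes via ARP and leader election. The probe is just a shell pipeline; an equivalent Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the `lsmod | grep ip_vs` check from the log:
// grep exits non-zero when nothing matches, so a nil error means the
// ip_vs kernel modules are loaded.
func ipvsAvailable() bool {
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	fmt.Println("ipvs available:", ipvsAvailable())
}
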
	I1120 20:52:12.067457  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
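
The manifest above is a static pod: the kubelet watches staticPodPath (/etc/kubernetes/manifests, per the KubeletConfiguration earlier) and runs whatever pod manifests appear there, with no API server involved. Deploying kube-vip therefore reduces to writing the file, which the scp step below performs; a minimal sketch of the same idea (manifest body elided):

package main

import "os"

func main() {
	// The kubelet picks this file up automatically; no kubectl apply needed.
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as logged above ...\n")
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
		panic(err)
	}
}
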
	I1120 20:52:12.067537  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:12.075923  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:52:12.076002  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 20:52:12.083739  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 20:52:12.096285  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:52:12.109031  323157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1120 20:52:12.121723  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:52:12.134083  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:52:12.137866  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:52:12.148115  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:12.228004  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:12.251717  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.2
	I1120 20:52:12.251748  323157 certs.go:195] generating shared ca certs ...
	I1120 20:52:12.251770  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.251938  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:52:12.251981  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:52:12.251992  323157 certs.go:257] generating profile certs ...
	I1120 20:52:12.252071  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:52:12.252098  323157 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09
	I1120 20:52:12.252119  323157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 20:52:12.330376  323157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 ...
	I1120 20:52:12.330417  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09: {Name:mk6b74f2e5931344472166b62a32edaf4f45744b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330619  323157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 ...
	I1120 20:52:12.330655  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09: {Name:mk229093d7281b814de77a27daa6f3543e470a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330779  323157 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt
	I1120 20:52:12.330974  323157 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key
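
The apiserver cert above is issued with IP SANs covering the in-cluster service VIP (10.96.0.1), the control-plane node IPs, and the HA VIP (192.168.49.254), so clients can validate the server on any of those addresses. A self-contained Go sketch of issuing a cert with IP SANs (self-signed here for brevity; minikube signs with its minikubeCA instead, and the SAN list below is abbreviated):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-922218", "localhost"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("192.168.49.2"),
			net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
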
	I1120 20:52:12.331167  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:52:12.331190  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:52:12.331230  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:52:12.331254  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:52:12.331277  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:52:12.331295  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:52:12.331313  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:52:12.331331  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:52:12.331349  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:52:12.331428  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:52:12.331475  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:52:12.331490  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:52:12.331519  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:52:12.331552  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:52:12.331587  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:52:12.331662  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:12.331712  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.331735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.331750  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.332594  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:52:12.353047  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:52:12.370168  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:52:12.387211  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:52:12.405559  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:52:12.422666  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:52:12.441539  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:52:12.460737  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:52:12.479326  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:52:12.497570  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:52:12.515902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:52:12.534796  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:52:12.548189  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:52:12.554678  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.562462  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:52:12.570059  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.573962  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.574018  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.607754  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:12.615941  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.623665  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:52:12.632109  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636168  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636242  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.670187  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:52:12.678284  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.685973  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:52:12.693528  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697235  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697293  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.731035  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:52:12.738959  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:52:12.742968  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:52:12.789124  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:52:12.832435  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:52:12.886449  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:52:12.943193  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:52:12.978550  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
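
Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so stale certs can be regenerated before the control plane restarts. The same check in Go (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h")
}
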
	I1120 20:52:13.013640  323157 kubeadm.go:401] StartCluster: {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:13.013797  323157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:52:13.013859  323157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:52:13.049696  323157 cri.go:89] found id: "65d1e3fad6b2daa0d2eb48dff43ccc96c150434dda9afd9eeaf84004fee7ace3"
	I1120 20:52:13.049721  323157 cri.go:89] found id: "8b6d87aa881c9d7ce48cf020cc5a82bcd71165681bd09bdbef589896ef08b244"
	I1120 20:52:13.049727  323157 cri.go:89] found id: "406607e74d1618ca02cbf22003052ea65983c0e1235732ec547478bff625b9ff"
	I1120 20:52:13.049732  323157 cri.go:89] found id: "9e882a89de870c006dd62af4f419f69f18af696b07ee1686b859a279092e03e0"
	I1120 20:52:13.049737  323157 cri.go:89] found id: "45a868d0ee3cc88db4f8ceed46d0f4eddce85b589457dcbb93848dd871b099bf"
	I1120 20:52:13.049741  323157 cri.go:89] found id: ""
	I1120 20:52:13.049788  323157 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 20:52:13.062401  323157 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:52:13Z" level=error msg="open /run/runc: no such file or directory"
	I1120 20:52:13.062470  323157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:52:13.070809  323157 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 20:52:13.070832  323157 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 20:52:13.070881  323157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 20:52:13.078757  323157 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:52:13.079306  323157 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-922218" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.079441  323157 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "ha-922218" cluster setting kubeconfig missing "ha-922218" context setting]
	I1120 20:52:13.079865  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.080582  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 20:52:13.081160  323157 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 20:52:13.081177  323157 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 20:52:13.081183  323157 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 20:52:13.081188  323157 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 20:52:13.081196  323157 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 20:52:13.081252  323157 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 20:52:13.081712  323157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 20:52:13.089447  323157 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 20:52:13.089467  323157 kubeadm.go:602] duration metric: took 18.629525ms to restartPrimaryControlPlane
	I1120 20:52:13.089478  323157 kubeadm.go:403] duration metric: took 75.851486ms to StartCluster
	I1120 20:52:13.089496  323157 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.089563  323157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.090205  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.090465  323157 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:52:13.090490  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:52:13.090499  323157 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:13.090755  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.093327  323157 out.go:179] * Enabled addons: 
	I1120 20:52:13.094381  323157 addons.go:515] duration metric: took 3.879805ms for enable addons: enabled=[]
	I1120 20:52:13.094412  323157 start.go:247] waiting for cluster config update ...
	I1120 20:52:13.094424  323157 start.go:256] writing updated cluster config ...
	I1120 20:52:13.095823  323157 out.go:203] 
	I1120 20:52:13.097078  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.097195  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.098642  323157 out.go:179] * Starting "ha-922218-m02" control-plane node in "ha-922218" cluster
	I1120 20:52:13.099780  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:13.101045  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:13.102201  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:13.102233  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:13.102244  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:13.102316  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:13.102330  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:13.102446  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.124350  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:13.124372  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:13.124388  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:13.124422  323157 start.go:360] acquireMachinesLock for ha-922218-m02: {Name:mk327cff0c42e8fe5ded9f6386acc07315d39a09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:13.124488  323157 start.go:364] duration metric: took 45.103µs to acquireMachinesLock for "ha-922218-m02"
	I1120 20:52:13.124508  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:13.124518  323157 fix.go:54] fixHost starting: m02
	I1120 20:52:13.124771  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.143934  323157 fix.go:112] recreateIfNeeded on ha-922218-m02: state=Stopped err=<nil>
	W1120 20:52:13.143964  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:13.149354  323157 out.go:252] * Restarting existing docker container for "ha-922218-m02" ...
	I1120 20:52:13.149455  323157 cli_runner.go:164] Run: docker start ha-922218-m02
	I1120 20:52:13.461778  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.484306  323157 kic.go:430] container "ha-922218-m02" state is running.
	I1120 20:52:13.484763  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:13.505868  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.506112  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:13.506167  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:13.526643  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:13.526854  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:13.526866  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:13.527491  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45258->127.0.0.1:32813: read: connection reset by peer
	I1120 20:52:16.660479  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
	I1120 20:52:16.660511  323157 ubuntu.go:182] provisioning hostname "ha-922218-m02"
	I1120 20:52:16.660584  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.679969  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.680183  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.680195  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m02 && echo "ha-922218-m02" | sudo tee /etc/hostname
	I1120 20:52:16.821890  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
	I1120 20:52:16.821965  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.839799  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.840017  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.840033  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:16.971112  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:16.971145  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:16.971166  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:16.971179  323157 provision.go:84] configureAuth start
	I1120 20:52:16.971279  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:16.989488  323157 provision.go:143] copyHostCerts
	I1120 20:52:16.989529  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989560  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:16.989569  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989635  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:16.989719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989738  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:16.989744  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989770  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:16.989870  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989892  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:16.989898  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989924  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:16.989977  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m02 san=[127.0.0.1 192.168.49.3 ha-922218-m02 localhost minikube]
	I1120 20:52:18.325243  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:18.325315  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:18.325359  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.349476  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:18.454303  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:18.454394  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:52:18.479542  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:18.479667  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:52:18.500104  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:18.500180  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:18.518171  323157 provision.go:87] duration metric: took 1.546978244s to configureAuth
	I1120 20:52:18.518200  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:18.518425  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:18.518527  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.537190  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:18.537424  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:18.537440  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:18.895794  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:52:18.895824  323157 machine.go:97] duration metric: took 5.389701302s to provisionDockerMachine
	I1120 20:52:18.895839  323157 start.go:293] postStartSetup for "ha-922218-m02" (driver="docker")
	I1120 20:52:18.895853  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:18.895988  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:18.896049  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.917397  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.017957  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:19.023501  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:19.023526  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:19.023536  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:19.023581  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:19.023657  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:19.023667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:19.023756  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:19.033501  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:19.054171  323157 start.go:296] duration metric: took 158.315421ms for postStartSetup
	I1120 20:52:19.054290  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:19.054332  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.076545  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.179900  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:19.186175  323157 fix.go:56] duration metric: took 6.061648548s for fixHost
	I1120 20:52:19.186235  323157 start.go:83] releasing machines lock for "ha-922218-m02", held for 6.061714164s
	I1120 20:52:19.186321  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:19.212036  323157 out.go:179] * Found network options:
	I1120 20:52:19.213348  323157 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 20:52:19.214893  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:52:19.214943  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:52:19.215032  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:19.215091  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.215108  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:19.215187  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.241015  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.241538  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.437902  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:19.444586  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:19.444668  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:19.455464  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:19.455492  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:19.455532  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:19.455584  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:19.479915  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:19.496789  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:19.496839  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:19.512753  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:19.525991  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:19.636269  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:19.743860  323157 docker.go:234] disabling docker service ...
	I1120 20:52:19.743937  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:19.758942  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:19.771625  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:19.879756  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:19.984607  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:19.997908  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:20.012508  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:20.012564  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.021752  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:20.021808  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.031377  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.041137  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.050156  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:20.058809  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.068260  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.078190  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.087650  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:20.095104  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:20.102596  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:20.245093  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:53:50.500050  323157 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.254907746s)
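Taken together, the sed edits above amount to roughly the following state for the /etc/crio/crio.conf.d/02-crio.conf drop-in (a reconstruction, not captured output; the TOML section headers follow the stock cri-o layout and are assumed here):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

Note that the `systemctl restart crio` that applies this configuration took 1m30s to complete, which accounts for the jump from the 20:52 to the 20:53 timestamps.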
	I1120 20:53:50.500099  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:53:50.500170  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:53:50.504526  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:53:50.504579  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:53:50.508365  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:53:50.534774  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:53:50.534864  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.562018  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.592115  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:53:50.593411  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:53:50.594685  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:53:50.612868  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:53:50.617151  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
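The one-liner above is dense; a readable equivalent doing the same three steps (a sketch) is:

	# drop any stale host.minikube.internal entry, append the current
	# gateway IP, then copy the temp file back with root privileges
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	echo $'192.168.49.1\thost.minikube.internal' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts

The trailing `sudo cp` matters: a plain `>` redirection onto /etc/hosts would be performed by the unprivileged shell rather than by sudo.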
	I1120 20:53:50.628089  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:53:50.628365  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:50.628586  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:53:50.646653  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:53:50.646897  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.3
	I1120 20:53:50.646915  323157 certs.go:195] generating shared ca certs ...
	I1120 20:53:50.646931  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:53:50.647073  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:53:50.647108  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:53:50.647117  323157 certs.go:257] generating profile certs ...
	I1120 20:53:50.647209  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:53:50.647303  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c836c87f
	I1120 20:53:50.647340  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:53:50.647354  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:53:50.647371  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:53:50.647384  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:53:50.647397  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:53:50.647409  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:53:50.647421  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:53:50.647433  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:53:50.647458  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:53:50.647511  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:53:50.647546  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:53:50.647555  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:53:50.647579  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:53:50.647605  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:53:50.647625  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:53:50.647667  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:53:50.647693  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:53:50.647706  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:50.647719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:53:50.647768  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:53:50.665659  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:53:50.755584  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:53:50.760041  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:53:50.768729  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:53:50.772558  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:53:50.781784  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:53:50.785575  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:53:50.794334  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:53:50.798078  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:53:50.807321  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:53:50.811305  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:53:50.819736  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:53:50.823350  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:53:50.831741  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:53:50.849848  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:53:50.867486  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:53:50.884818  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:53:50.902061  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:53:50.919790  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:53:50.937569  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:53:50.955443  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:53:50.972778  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:53:50.990638  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:53:51.008199  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:53:51.026275  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:53:51.039905  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:53:51.054001  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:53:51.068159  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:53:51.083445  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:53:51.096696  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:53:51.109424  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 20:53:51.122677  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:53:51.129308  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.137038  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:53:51.144950  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148713  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148764  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.183638  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:53:51.192271  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.199701  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:53:51.207336  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211049  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211109  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.247556  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:53:51.255756  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.263373  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:53:51.270762  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274831  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274886  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.310488  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
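For context: the `<hash>.0` symlinks verified here follow OpenSSL's c_rehash convention, where the link name is the subject hash of the certificate. The hash behind the b5213941.0 link above can be reproduced with (a sketch, using the same command the log runs):

	# prints the subject hash OpenSSL uses to locate the CA, e.g. b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem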
	I1120 20:53:51.318664  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:53:51.322469  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:53:51.356447  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:53:51.390490  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:53:51.424733  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:53:51.459076  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:53:51.492960  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
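Each `-checkend 86400` probe above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero in that case, which is how imminent expiry would surface in this step. A standalone form (cert.pem is a placeholder path):

	openssl x509 -noout -in cert.pem -checkend 86400 \
	  && echo "valid for more than 24h" \
	  || echo "expires within 24h (or already expired)"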
	I1120 20:53:51.527319  323157 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 20:53:51.527454  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:53:51.527485  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:53:51.527542  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:53:51.541450  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:53:51.541513  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
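Because the `lsmod | grep ip_vs` probe above exited 1, the generated manifest configures kube-vip for ARP-based VIP announcement only (vip_arp, vip_interface: eth0, address 192.168.49.254) and omits IPVS-based control-plane load balancing, as the warning notes. Whether the module could be loaded at all can be checked with (a hypothetical spot-check, not run by the test):

	# dry-run: succeeds if an ip_vs kernel module is available to load
	sudo modprobe -n ip_vs && echo "ip_vs available" || echo "ip_vs missing"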
	I1120 20:53:51.541572  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:53:51.549762  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:53:51.549835  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:53:51.558197  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:53:51.572021  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:53:51.585070  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:53:51.597674  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:53:51.601380  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:53:51.611235  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.721067  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.734155  323157 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:53:51.734528  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:51.736279  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:53:51.737724  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.846124  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.859674  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:53:51.859761  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:53:51.860000  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m02" to be "Ready" ...
	W1120 20:53:53.863125  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:55.863446  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:57.863942  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	I1120 20:54:00.364328  323157 node_ready.go:49] node "ha-922218-m02" is "Ready"
	I1120 20:54:00.364359  323157 node_ready.go:38] duration metric: took 8.504330619s for node "ha-922218-m02" to be "Ready" ...
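The readiness poll above is roughly equivalent to the following manual check (a sketch; the kubeconfig context name is assumed to match the profile name):

	kubectl --context ha-922218 get node ha-922218-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" during the retries logged above, "True" once Ready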
	I1120 20:54:00.364381  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:00.364433  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:00.376821  323157 api_server.go:72] duration metric: took 8.642616301s to wait for apiserver process to appear ...
	I1120 20:54:00.376853  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:00.376887  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:00.381080  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
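	The healthz probe performed in-process above is equivalent to (a sketch; -k skips TLS verification, which minikube's own client does not):

	curl -sk https://192.168.49.2:8443/healthz
	# expected body on success: ok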
	I1120 20:54:00.382023  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:00.382047  323157 api_server.go:131] duration metric: took 5.187881ms to wait for apiserver health ...
	I1120 20:54:00.382059  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:00.388374  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:00.388402  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.388407  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.388410  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.388414  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.388417  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.388422  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.388425  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.388428  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.388435  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.388440  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.388445  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.388448  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.388453  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.388461  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.388465  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.388468  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.388473  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.388479  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.388482  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.388485  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.388491  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.388494  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.388496  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.388499  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.388502  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.388505  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.388510  323157 system_pods.go:74] duration metric: took 6.446272ms to wait for pod list to return data ...
	I1120 20:54:00.388517  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:00.391628  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:00.391650  323157 default_sa.go:55] duration metric: took 3.127505ms for default service account to be created ...
	I1120 20:54:00.391659  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:00.397448  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:00.397474  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.397480  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.397484  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.397487  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.397491  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.397495  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.397498  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.397501  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.397507  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.397515  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.397519  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.397523  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.397528  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.397534  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.397537  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.397542  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.397546  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.397550  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.397553  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.397556  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.397559  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.397564  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.397567  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.397569  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.397574  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.397577  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.397584  323157 system_pods.go:126] duration metric: took 5.920412ms to wait for k8s-apps to be running ...
	I1120 20:54:00.397590  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:00.397634  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:00.411201  323157 system_svc.go:56] duration metric: took 13.597746ms WaitForService to wait for kubelet
	I1120 20:54:00.411248  323157 kubeadm.go:587] duration metric: took 8.677048036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:00.411276  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:00.415079  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415110  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415124  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415127  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415131  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415134  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415137  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415140  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415143  323157 node_conditions.go:105] duration metric: took 3.862735ms to run NodePressure ...
	I1120 20:54:00.415156  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:00.415179  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:00.416940  323157 out.go:203] 
	I1120 20:54:00.418262  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:00.418361  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.420050  323157 out.go:179] * Starting "ha-922218-m03" control-plane node in "ha-922218" cluster
	I1120 20:54:00.421459  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:00.422633  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:00.423753  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:00.423776  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:00.423854  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:00.423922  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:00.423940  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:00.424083  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.445274  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:00.445296  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:00.445313  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:00.445346  323157 start.go:360] acquireMachinesLock for ha-922218-m03: {Name:mk2f097c0ed961dc411b64ff8718e82c63bed499 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:00.445404  323157 start.go:364] duration metric: took 37.644µs to acquireMachinesLock for "ha-922218-m03"
	I1120 20:54:00.445429  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:00.445440  323157 fix.go:54] fixHost starting: m03
	I1120 20:54:00.445721  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.464059  323157 fix.go:112] recreateIfNeeded on ha-922218-m03: state=Stopped err=<nil>
	W1120 20:54:00.464096  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:00.465782  323157 out.go:252] * Restarting existing docker container for "ha-922218-m03" ...
	I1120 20:54:00.465877  323157 cli_runner.go:164] Run: docker start ha-922218-m03
	I1120 20:54:00.752312  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.772989  323157 kic.go:430] container "ha-922218-m03" state is running.
	I1120 20:54:00.773519  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:00.792599  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.792864  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:00.792955  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:00.811862  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:00.812107  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:00.812122  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:00.812859  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51650->127.0.0.1:32818: read: connection reset by peer
	I1120 20:54:03.944569  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:03.944604  323157 ubuntu.go:182] provisioning hostname "ha-922218-m03"
	I1120 20:54:03.944668  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:03.962694  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:03.962979  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:03.963001  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m03 && echo "ha-922218-m03" | sudo tee /etc/hostname
	I1120 20:54:04.105497  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:04.105607  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.123058  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.123306  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.123324  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:04.258245  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:54:04.258278  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:04.258296  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:04.258308  323157 provision.go:84] configureAuth start
	I1120 20:54:04.258362  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:04.279610  323157 provision.go:143] copyHostCerts
	I1120 20:54:04.279658  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279700  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:04.279713  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279830  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:04.279954  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.279983  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:04.279994  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.280037  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:04.280114  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280137  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:04.280143  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280182  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:04.280275  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m03 san=[127.0.0.1 192.168.49.4 ha-922218-m03 localhost minikube]
	I1120 20:54:04.594873  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:04.594949  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:04.595006  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.620652  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:04.724930  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:04.724996  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:04.744735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:04.744808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:04.767156  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:04.767237  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:04.786208  323157 provision.go:87] duration metric: took 527.885771ms to configureAuth
	I1120 20:54:04.786260  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:04.786486  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:04.786596  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.804998  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.805211  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.805245  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:05.142154  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
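Note: the file written above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so in-cluster registries exposed on ClusterIPs can be pulled from over plain HTTP. The log does not show how the variable reaches crio; on a systemd host it would typically be wired through a drop-in like the following (path and exact wiring are an assumption):

    # /etc/systemd/system/crio.service.d/10-minikube.conf  (hypothetical path)
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS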
	I1120 20:54:05.142184  323157 machine.go:97] duration metric: took 4.349303942s to provisionDockerMachine
	I1120 20:54:05.142196  323157 start.go:293] postStartSetup for "ha-922218-m03" (driver="docker")
	I1120 20:54:05.142207  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:05.142302  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:05.142352  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.161336  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.258512  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:05.262505  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:05.262541  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:05.262557  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:05.262619  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:05.262714  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:05.262726  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:05.262809  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:05.270992  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:05.290254  323157 start.go:296] duration metric: took 148.013138ms for postStartSetup
	I1120 20:54:05.290349  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:05.290395  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.312238  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.418404  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:05.424662  323157 fix.go:56] duration metric: took 4.979214262s for fixHost
	I1120 20:54:05.424693  323157 start.go:83] releasing machines lock for "ha-922218-m03", held for 4.979275228s
	I1120 20:54:05.424774  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:05.448969  323157 out.go:179] * Found network options:
	I1120 20:54:05.450451  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 20:54:05.453201  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453264  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453295  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453313  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:05.453406  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:05.453469  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.453486  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:05.453555  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.475420  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.475725  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.630989  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:05.636113  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:05.636175  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:05.644977  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
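Note: ssh_runner logs commands with their shell quoting stripped; with quoting restored, the CNI-disabling command above is roughly:

    # rename any bridge/podman CNI configs so crio's own CNI config wins
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;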
	I1120 20:54:05.645012  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:05.645047  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:05.645097  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:05.661262  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:05.674425  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:05.674494  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:05.689725  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:05.702759  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:05.825858  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:05.942569  323157 docker.go:234] disabling docker service ...
	I1120 20:54:05.942658  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:05.958482  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:05.972123  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:06.094822  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:06.215707  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:54:06.229448  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:06.245084  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:06.245154  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.254965  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:06.255020  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.265259  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.275476  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.285519  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:06.294777  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.304916  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.313870  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
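Note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place; assuming the stock kicbase file, the net result of those edits is a fragment like this (section placement under [crio.image]/[crio.runtime] is the usual crio layout, assumed here):

    # /etc/crio/crio.conf.d/02-crio.conf — reconstructed from the sed commands above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]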
	I1120 20:54:06.322957  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:06.330497  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:06.338069  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:06.450575  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:54:06.648124  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:06.648243  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:06.653061  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:06.653129  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:06.657494  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:06.699746  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:54:06.699846  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.736255  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.768946  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:06.770257  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:06.771411  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:06.772594  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:06.792494  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:06.797451  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
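Note: the hosts update uses a rebuild-and-copy pattern rather than sed -i or an append. Inside a container /etc/hosts is a bind mount, so it can be overwritten in place with cp, but not atomically replaced by rename (which is what sed -i does). The generic shape of the pattern:

    # rebuild the file without the old entry, add the new one, then overwrite
    # the bind-mounted original in place
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts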
	I1120 20:54:06.810322  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:06.810733  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:06.811056  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:06.832939  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:06.833235  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.4
	I1120 20:54:06.833251  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:06.833270  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:06.833418  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:06.833458  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:06.833467  323157 certs.go:257] generating profile certs ...
	I1120 20:54:06.833538  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:54:06.833595  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.8321a6cf
	I1120 20:54:06.833629  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:54:06.833641  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:06.833655  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:06.833667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:06.833679  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:06.833691  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:54:06.833704  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:54:06.833716  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:54:06.833730  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:54:06.833780  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:06.833808  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:06.833818  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:06.833838  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:06.833859  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:06.833880  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:06.833917  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:06.833947  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:06.833959  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:06.833973  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:06.834021  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:54:06.855612  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:54:06.947569  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:54:06.951943  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:54:06.960328  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:54:06.963907  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:54:06.972305  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:54:06.975879  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:54:06.984275  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:54:06.987841  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:54:06.995987  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:54:06.999744  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:54:07.008281  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:54:07.011963  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:54:07.020131  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:07.038787  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:07.058870  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:07.076347  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:07.093829  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:54:07.111361  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:54:07.133151  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:54:07.155916  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:54:07.176755  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:07.200109  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:07.222203  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:07.243966  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:54:07.260671  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:54:07.277366  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:54:07.293185  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:54:07.309452  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:54:07.324432  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:54:07.339188  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
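Note: the stat/"scp --> memory"/"scp memory -->" round-trip above (20:54:06.947 through 20:54:07.339) is how a joining control plane gets the cluster-wide key material: sa.pub/sa.key (service-account token signing), the front-proxy CA, and the etcd CA are read from the primary node over SSH into memory, then written to the new node. All control-plane nodes must share these exact files or token verification and API aggregation break. A hand-rolled equivalent for one file, using the SSH ports and key path from the log (illustrative only):

    # primary listens on host port 32808, the joining m03 on 32818 (see sshutil lines)
    ssh -i /home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa \
        -p 32808 docker@127.0.0.1 'sudo cat /var/lib/minikube/certs/sa.key' \
      | ssh -i /home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa \
        -p 32818 docker@127.0.0.1 'sudo tee /var/lib/minikube/certs/sa.key >/dev/null'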
	I1120 20:54:07.353766  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:07.359885  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.367247  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:07.374693  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378281  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378337  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.415439  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:54:07.423662  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.431392  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:07.439351  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442939  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442985  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.477391  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:07.485472  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.493309  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:07.500900  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504615  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504678  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.540459  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
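Note: the test -L checks above verify OpenSSL-style subject-hash links: tools that trust /etc/ssl/certs look CAs up by the hash of their subject, so each CA needs both a readable name and a <hash>.0 symlink. Reconstructed for the cluster CA (the hash b5213941 appears in the log):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # -> b5213941
    sudo ln -fs minikubeCA.pem "/etc/ssl/certs/${hash}.0"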
	I1120 20:54:07.548510  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:07.552608  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:54:07.587157  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:54:07.623309  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:54:07.659308  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:54:07.694048  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:54:07.730482  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
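Note: -checkend N asks whether the certificate will still be valid N seconds from now, so the six checks above fail (non-zero exit) for any control-plane cert expiring within 24 hours, which would trigger regeneration. By hand:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h - would be regenerated"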
	I1120 20:54:07.766483  323157 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 20:54:07.766598  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
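Note: the unit fragment above is a standard systemd override: the empty ExecStart= first clears the command inherited from the base kubelet.service, then the second ExecStart sets the node-specific flags (--node-ip, --hostname-override). After the drop-in is written (the 363-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), it can be inspected with:

    sudo systemctl daemon-reload      # re-read unit files so the override applies
    systemctl cat kubelet             # base unit plus every drop-in, in order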
	I1120 20:54:07.766625  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:54:07.766666  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:54:07.780008  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
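Note: kube-vip can load-balance the control plane via IPVS, but only if the kernel modules are loaded; since lsmod finds none here, minikube falls back to a plain ARP-advertised VIP without load balancing. Making the modules available would look roughly like this (module names are the usual IPVS set — an assumption, nothing in this log loads them):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs   # the probe minikube runs above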
	I1120 20:54:07.780076  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
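Note: this manifest is deployed as a static pod (the 1358-byte scp to /etc/kubernetes/manifests/kube-vip.yaml below), so the kubelet runs it straight from disk — the VIP 192.168.49.254 can come up even while no API server is reachable. Once the node registers, the pod appears in the API under the node-suffixed name seen later in this log:

    kubectl -n kube-system get pod kube-vip-ha-922218-m03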
	I1120 20:54:07.780149  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:07.788134  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:07.788227  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:54:07.796010  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:07.808930  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:07.821862  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:54:07.834855  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:07.838597  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:07.850360  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:07.963081  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:07.976660  323157 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:54:07.976968  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:07.979321  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:07.980344  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:08.088528  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:08.102382  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:08.102458  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:08.102723  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105908  323157 node_ready.go:49] node "ha-922218-m03" is "Ready"
	I1120 20:54:08.105930  323157 node_ready.go:38] duration metric: took 3.189835ms for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105943  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:08.105984  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:08.117937  323157 api_server.go:72] duration metric: took 141.218493ms to wait for apiserver process to appear ...
	I1120 20:54:08.117959  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:08.117974  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:08.122063  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:54:08.123003  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:08.123025  323157 api_server.go:131] duration metric: took 5.061002ms to wait for apiserver health ...
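Note: the same health probe by hand; -k is needed because the API server's certificate is signed by minikubeCA rather than a system CA (and the check goes to .2, the primary, after the stale-VIP override above):

    curl -sk https://192.168.49.2:8443/healthz   # expect: ok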
	I1120 20:54:08.123033  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:08.128879  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:08.128913  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.128922  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.128934  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.128940  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.128953  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.128958  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.128965  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.128968  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.128973  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.128980  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.128984  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.128988  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.128993  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.128997  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.129005  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.129009  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.129016  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.129020  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.129026  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.129029  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.129032  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.129036  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.129042  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.129045  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.129047  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.129050  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.129056  323157 system_pods.go:74] duration metric: took 6.018012ms to wait for pod list to return data ...
	I1120 20:54:08.129064  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:08.131679  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:08.131697  323157 default_sa.go:55] duration metric: took 2.627778ms for default service account to be created ...
	I1120 20:54:08.131713  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:08.136580  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:08.136605  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.136610  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.136614  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.136617  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.136625  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.136629  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.136637  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.136642  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.136647  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.136652  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.136656  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.136661  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.136666  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.136670  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.136676  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.136680  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.136685  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.136689  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.136693  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.136696  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.136710  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.136718  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.136721  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.136724  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.136727  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.136730  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.136739  323157 system_pods.go:126] duration metric: took 5.020694ms to wait for k8s-apps to be running ...
	I1120 20:54:08.136745  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:08.136787  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:08.150040  323157 system_svc.go:56] duration metric: took 13.283775ms WaitForService to wait for kubelet
	I1120 20:54:08.150069  323157 kubeadm.go:587] duration metric: took 173.353654ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:08.150089  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:08.153814  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153839  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153854  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153860  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153866  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153871  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153876  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153888  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153894  323157 node_conditions.go:105] duration metric: took 3.799942ms to run NodePressure ...
	I1120 20:54:08.153910  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:08.153941  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:08.155986  323157 out.go:203] 
	I1120 20:54:08.157318  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:08.157412  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.158743  323157 out.go:179] * Starting "ha-922218-m04" worker node in "ha-922218" cluster
	I1120 20:54:08.159836  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:08.160869  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:08.161862  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:08.161877  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:08.161937  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:08.161978  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:08.161992  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:08.162094  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.182859  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:08.182880  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:08.182897  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:08.182927  323157 start.go:360] acquireMachinesLock for ha-922218-m04: {Name:mk1c4e4c260415277383e4e2d7891bdf9d980713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:08.182984  323157 start.go:364] duration metric: took 40.112µs to acquireMachinesLock for "ha-922218-m04"
	I1120 20:54:08.183005  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:08.183013  323157 fix.go:54] fixHost starting: m04
	I1120 20:54:08.183210  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.201956  323157 fix.go:112] recreateIfNeeded on ha-922218-m04: state=Stopped err=<nil>
	W1120 20:54:08.201985  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:08.203921  323157 out.go:252] * Restarting existing docker container for "ha-922218-m04" ...
	I1120 20:54:08.203990  323157 cli_runner.go:164] Run: docker start ha-922218-m04
	I1120 20:54:08.500882  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.520205  323157 kic.go:430] container "ha-922218-m04" state is running.
	I1120 20:54:08.520698  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:08.539598  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.539924  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:08.540000  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:08.558817  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:08.559028  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:08.559039  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:08.559647  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56832->127.0.0.1:32823: read: connection reset by peer
	I1120 20:54:11.694470  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
	
	I1120 20:54:11.694498  323157 ubuntu.go:182] provisioning hostname "ha-922218-m04"
	I1120 20:54:11.694556  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.713721  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.714041  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.714063  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m04 && echo "ha-922218-m04" | sudo tee /etc/hostname
	I1120 20:54:11.857712  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
	
	I1120 20:54:11.857805  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.876191  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.876435  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.876453  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:12.008064  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:54:12.008105  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:12.008131  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:12.008149  323157 provision.go:84] configureAuth start
	I1120 20:54:12.008245  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.026345  323157 provision.go:143] copyHostCerts
	I1120 20:54:12.026390  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026424  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:12.026431  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026501  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:12.026600  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026623  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:12.026630  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026671  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:12.026742  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026767  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:12.026776  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026803  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:12.026878  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m04 san=[127.0.0.1 192.168.49.5 ha-922218-m04 localhost minikube]
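
configureAuth issues a server certificate whose SANs cover 127.0.0.1, 192.168.49.5, the machine name, localhost and minikube, as the san=[...] list above shows. A minimal sketch of the SAN wiring with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the CA key named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-922218-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged above.
		DNSNames:    []string{"ha-922218-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
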
	I1120 20:54:12.101540  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:12.101615  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:12.101661  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.120979  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.218812  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:12.218866  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:12.237906  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:12.237973  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:12.256242  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:12.256298  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:12.274435  323157 provision.go:87] duration metric: took 266.26509ms to configureAuth
	I1120 20:54:12.274472  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:12.274774  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:12.274937  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.294444  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:12.294713  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:12.294742  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:12.585665  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:54:12.585696  323157 machine.go:97] duration metric: took 4.045752536s to provisionDockerMachine
	I1120 20:54:12.585712  323157 start.go:293] postStartSetup for "ha-922218-m04" (driver="docker")
	I1120 20:54:12.585734  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:12.585814  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:12.585872  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.604768  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.701189  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:12.705103  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:12.705131  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:12.705142  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:12.705203  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:12.705316  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:12.705328  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:12.705436  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:12.713808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:12.733683  323157 start.go:296] duration metric: took 147.949948ms for postStartSetup
	I1120 20:54:12.733781  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:12.733836  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.752642  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.846722  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:12.851576  323157 fix.go:56] duration metric: took 4.668555957s for fixHost
	I1120 20:54:12.851609  323157 start.go:83] releasing machines lock for "ha-922218-m04", held for 4.668610463s
	I1120 20:54:12.851688  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.872067  323157 out.go:179] * Found network options:
	I1120 20:54:12.873523  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 20:54:12.874579  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874614  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874623  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874645  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874656  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874666  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:12.874743  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:12.874790  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.874801  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:12.874864  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:13.046495  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:13.051522  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:13.051600  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:13.060371  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:54:13.060402  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:13.060441  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:13.060496  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:13.075603  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:13.089123  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:13.089184  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:13.104495  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:13.117935  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:13.204636  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:13.289453  323157 docker.go:234] disabling docker service ...
	I1120 20:54:13.289527  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:13.304738  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:13.317782  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:13.405405  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:13.491709  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
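
Since this cluster runs CRI-O, the runtime setup above stops, disables and masks the cri-docker and docker units so they cannot reclaim the workloads. A compact sketch of the same stop/disable/mask pattern with os/exec; the exact verb/flag combinations minikube passes differ slightly per unit, so treat this as an approximation:

package main

import (
	"log"
	"os/exec"
)

func main() {
	units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
	for _, unit := range units {
		for _, verb := range []string{"stop", "disable", "mask"} {
			// Failures are logged but not fatal: a unit may simply be absent.
			if out, err := exec.Command("sudo", "systemctl", verb, unit).CombinedOutput(); err != nil {
				log.Printf("systemctl %s %s: %v (%s)", verb, unit, err, out)
			}
		}
	}
}
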
	I1120 20:54:13.504420  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:13.519371  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:13.519439  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.528469  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:13.528520  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.537935  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.546887  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.555908  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:13.564139  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.573055  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.581595  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
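
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this drop-in (reconstructed from the commands, not captured from the node):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
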
	I1120 20:54:13.590695  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:13.597950  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:13.605162  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:13.690911  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:54:13.836871  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:13.836951  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:13.841421  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:13.841486  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:13.846169  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:13.871670  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:54:13.871776  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.899765  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.930597  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:13.931748  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:13.932757  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:13.933675  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 20:54:13.934705  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:13.952693  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:13.957363  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:13.968716  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:13.969001  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:13.969254  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:13.988111  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:13.988373  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.5
	I1120 20:54:13.988385  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:13.988399  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:13.988540  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:13.988575  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:13.988589  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:13.988603  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:13.988615  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:13.988628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:13.988691  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:13.988719  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:13.988729  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:13.988750  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:13.988771  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:13.988792  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:13.988827  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:13.988853  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:13.988866  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:13.988881  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:13.988902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:14.007643  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:14.026465  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:14.045259  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:14.064924  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:14.083817  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:14.101377  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:14.119564  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:14.126329  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.134374  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:14.142273  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146139  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146194  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.182277  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:14.190606  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.198830  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:14.206817  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210855  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210906  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.245946  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:54:14.254083  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.261737  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:14.269638  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273524  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273580  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.308064  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:54:14.316236  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:14.320194  323157 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:54:14.320268  323157 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 20:54:14.320379  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
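
kubeadm.go:947 logs the rendered kubelet drop-in before it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A sketch of rendering such a unit with text/template; the template and field names are illustrative and the real drop-in carries more flags than shown here:

package main

import (
	"os"
	"text/template"
)

// Illustrative template only; not minikube's actual template.
var kubeletUnit = template.Must(template.New("kubelet").Parse(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`))

func main() {
	_ = kubeletUnit.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.34.1", Node: "ha-922218-m04", IP: "192.168.49.5",
	})
}
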
	I1120 20:54:14.320454  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:14.328815  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:14.328872  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 20:54:14.336516  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:14.349467  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:14.362001  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:14.365657  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:14.375549  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.458116  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.472066  323157 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 20:54:14.472382  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:14.474034  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:14.474976  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.559289  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.572777  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:14.572849  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:14.573080  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m04" to be "Ready" ...
	W1120 20:54:16.576678  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:19.076525  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:21.078346  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	I1120 20:54:22.076345  323157 node_ready.go:49] node "ha-922218-m04" is "Ready"
	I1120 20:54:22.076377  323157 node_ready.go:38] duration metric: took 7.503280123s for node "ha-922218-m04" to be "Ready" ...
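
node_ready above polls the API until the node's Ready condition flips to True (about 7.5s in this run). A minimal client-go sketch of the same wait; the kubeconfig path, poll interval and timeout are assumptions, not minikube's values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-922218-m04", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
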
	I1120 20:54:22.076397  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:22.076458  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:22.089909  323157 system_svc.go:56] duration metric: took 13.491851ms WaitForService to wait for kubelet
	I1120 20:54:22.089941  323157 kubeadm.go:587] duration metric: took 7.617823089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:22.089966  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:22.093121  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093142  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093154  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093158  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093161  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093165  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093170  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093175  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093180  323157 node_conditions.go:105] duration metric: took 3.207725ms to run NodePressure ...
	I1120 20:54:22.093197  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:22.093255  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:22.093568  323157 ssh_runner.go:195] Run: rm -f paused
	I1120 20:54:22.097398  323157 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:54:22.097827  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 20:54:22.109570  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119449  323157 pod_ready.go:94] pod "coredns-66bc5c9577-2msz7" is "Ready"
	I1120 20:54:22.119483  323157 pod_ready.go:86] duration metric: took 9.881192ms for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119494  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.131629  323157 pod_ready.go:94] pod "coredns-66bc5c9577-kd4l6" is "Ready"
	I1120 20:54:22.131656  323157 pod_ready.go:86] duration metric: took 12.154214ms for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.134158  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138697  323157 pod_ready.go:94] pod "etcd-ha-922218" is "Ready"
	I1120 20:54:22.138722  323157 pod_ready.go:86] duration metric: took 4.537439ms for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138729  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142874  323157 pod_ready.go:94] pod "etcd-ha-922218-m02" is "Ready"
	I1120 20:54:22.142900  323157 pod_ready.go:86] duration metric: took 4.166255ms for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142909  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.298304  323157 request.go:683] "Waited before sending request" delay="155.234553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-922218-m03"
	I1120 20:54:22.498845  323157 request.go:683] "Waited before sending request" delay="197.338738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:22.501969  323157 pod_ready.go:94] pod "etcd-ha-922218-m03" is "Ready"
	I1120 20:54:22.502000  323157 pod_ready.go:86] duration metric: took 359.082878ms for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
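
The "Waited before sending request" entries here and below come from client-go's client-side rate limiter, which falls back to QPS 5 / Burst 10 when the rest.Config leaves them zero, as the QPS:0, Burst:0 in the config dump above does. A sketch of raising those limits; the numbers are illustrative, not what minikube uses:

package example

import "k8s.io/client-go/rest"

// withHigherThrottle bumps the client-side limits that produce the
// "client-side throttling" waits in the log. Values are illustrative only.
func withHigherThrottle(cfg *rest.Config) *rest.Config {
	cfg.QPS = 50
	cfg.Burst = 100
	return cfg
}
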
	I1120 20:54:22.698517  323157 request.go:683] "Waited before sending request" delay="196.343264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1120 20:54:22.702321  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.898835  323157 request.go:683] "Waited before sending request" delay="196.37899ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218"
	I1120 20:54:23.098414  323157 request.go:683] "Waited before sending request" delay="196.292789ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:23.101586  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218" is "Ready"
	I1120 20:54:23.101613  323157 pod_ready.go:86] duration metric: took 399.267945ms for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.101634  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.299099  323157 request.go:683] "Waited before sending request" delay="197.354769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m02"
	I1120 20:54:23.498968  323157 request.go:683] "Waited before sending request" delay="196.361911ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:23.502012  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m02" is "Ready"
	I1120 20:54:23.502037  323157 pod_ready.go:86] duration metric: took 400.398297ms for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.502045  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.698407  323157 request.go:683] "Waited before sending request" delay="196.284088ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m03"
	I1120 20:54:23.899090  323157 request.go:683] "Waited before sending request" delay="197.347334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:23.902334  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m03" is "Ready"
	I1120 20:54:23.902359  323157 pod_ready.go:86] duration metric: took 400.308088ms for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.098830  323157 request.go:683] "Waited before sending request" delay="196.34133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 20:54:24.102694  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.299178  323157 request.go:683] "Waited before sending request" delay="196.360417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218"
	I1120 20:54:24.499104  323157 request.go:683] "Waited before sending request" delay="196.347724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:24.502309  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218" is "Ready"
	I1120 20:54:24.502336  323157 pod_ready.go:86] duration metric: took 399.617093ms for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.502348  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.698782  323157 request.go:683] "Waited before sending request" delay="196.335349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m02"
	I1120 20:54:24.898597  323157 request.go:683] "Waited before sending request" delay="196.345917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:24.901960  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m02" is "Ready"
	I1120 20:54:24.901992  323157 pod_ready.go:86] duration metric: took 399.637685ms for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.902001  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.098365  323157 request.go:683] "Waited before sending request" delay="196.278218ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m03"
	I1120 20:54:25.299280  323157 request.go:683] "Waited before sending request" delay="197.379888ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.302430  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m03" is "Ready"
	I1120 20:54:25.302455  323157 pod_ready.go:86] duration metric: took 400.448425ms for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.498873  323157 request.go:683] "Waited before sending request" delay="196.293203ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 20:54:25.502860  323157 pod_ready.go:83] waiting for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.698288  323157 request.go:683] "Waited before sending request" delay="195.281134ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cpch"
	I1120 20:54:25.898934  323157 request.go:683] "Waited before sending request" delay="197.356231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.902128  323157 pod_ready.go:94] pod "kube-proxy-4cpch" is "Ready"
	I1120 20:54:25.902154  323157 pod_ready.go:86] duration metric: took 399.270347ms for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.902162  323157 pod_ready.go:83] waiting for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:26.098606  323157 request.go:683] "Waited before sending request" delay="196.346655ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.299163  323157 request.go:683] "Waited before sending request" delay="197.345494ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:26.498649  323157 request.go:683] "Waited before sending request" delay="96.287539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.699151  323157 request.go:683] "Waited before sending request" delay="197.392783ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.098399  323157 request.go:683] "Waited before sending request" delay="192.27694ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.498455  323157 request.go:683] "Waited before sending request" delay="92.237627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	W1120 20:54:27.908326  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:29.909034  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:32.408730  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:34.908689  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:37.408823  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:39.908861  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:42.408698  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:44.908702  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:47.408397  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:49.409469  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:51.908163  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:53.908996  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:56.408061  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:54:58.408624  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:00.908495  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:02.908955  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:05.408405  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:07.909016  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:10.408037  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:12.408417  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:14.908340  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:17.409065  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:19.908332  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:21.908889  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:24.408759  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:26.908929  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:29.408434  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:31.408588  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:33.409210  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:35.908636  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:37.909250  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:40.410051  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:42.909105  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:45.408430  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:47.408740  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:49.908450  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:52.409005  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:54.907859  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:56.908189  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:55:58.908541  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:00.909373  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:03.408429  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:05.408564  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:07.908140  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:09.908306  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:11.908938  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:14.408871  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:16.907877  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:18.907974  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:20.908614  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:23.408900  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:25.908472  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:28.408373  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:30.408570  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:32.408832  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:34.909276  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:37.408137  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:39.409076  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:41.409464  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:43.908812  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:46.408702  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:48.908615  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:51.408026  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:53.408283  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:55.408942  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:56:57.909263  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:00.408692  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:02.409101  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:04.907598  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:06.908152  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:08.909063  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:11.408240  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:13.908776  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:16.408622  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:18.908974  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:21.409451  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:23.908489  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:25.908547  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:27.909262  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:30.408274  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:32.409046  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:34.908267  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:37.408193  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:39.408371  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:41.908734  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:43.909408  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:46.408806  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:48.908938  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:51.408993  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:53.908365  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:55.908521  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:57:57.918887  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:00.408852  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:02.410752  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:04.909111  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:07.409095  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:09.908605  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:12.409207  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:14.409540  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:16.908532  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:18.909206  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	W1120 20:58:21.408380  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	I1120 20:58:22.098421  323157 pod_ready.go:86] duration metric: took 3m56.196242024s for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 20:58:22.098463  323157 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 20:58:22.098478  323157 pod_ready.go:40] duration metric: took 4m0.001055692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:58:22.100129  323157 out.go:203] 
	W1120 20:58:22.101328  323157 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 20:58:22.102425  323157 out.go:203] 
	
	
	==> CRI-O <==
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179274516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179299573Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179311849Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.183045155Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.183072535Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.18309208Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.18663601Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.186665457Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.186685532Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190271629Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190305928Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190331869Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.193887848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.193915839Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.344973428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2a8081bc-d00c-414c-a1b1-9cbbeb6545fc name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.34605763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=97d6402c-24a3-4d7a-a623-2f33e58951f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.347291242Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9641ec35-68ba-4c1b-9d66-1d5bd6212949 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.347491562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.352874189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.353102373Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc9d1e539cccfade6ddf8be32aa73b0c357fac2f392b0db94ea11587ea75a0d0/merged/etc/passwd: no such file or directory"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.35313636Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc9d1e539cccfade6ddf8be32aa73b0c357fac2f392b0db94ea11587ea75a0d0/merged/etc/group: no such file or directory"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.353871031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.386863103Z" level=info msg="Created container eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330: kube-system/storage-provisioner/storage-provisioner" id=9641ec35-68ba-4c1b-9d66-1d5bd6212949 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.387596765Z" level=info msg="Starting container: eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330" id=f71dc408-4a18-4daa-ad26-33c0ad73f76c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.389753227Z" level=info msg="Started container" PID=1426 containerID=eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330 description=kube-system/storage-provisioner/storage-provisioner id=f71dc408-4a18-4daa-ad26-33c0ad73f76c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d671375233375d3d75a3a3d4276bdfb8d5b7eec68c6b7eb4ea37982ea5c49d97
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	eb00966d573d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       2                   d671375233375       storage-provisioner                 kube-system
	fac2b6f885b5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       1                   d671375233375       storage-provisioner                 kube-system
	5394a253d0bd7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   cc5ea32e456a2       coredns-66bc5c9577-2msz7            kube-system
	48eb79307d4ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   911861e8c0958       coredns-66bc5c9577-kd4l6            kube-system
	ac1bee818f4ec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   66b55ad69697f       busybox-7b57f96db7-58ttm            default
	2651827b33b3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 minutes ago       Running             kube-proxy                0                   a4f0d5ddfe85b       kube-proxy-vqk4x                    kube-system
	084c8a1dec078       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               0                   f408c5ac54d0a       kindnet-f6wtm                       kube-system
	65d1e3fad6b2d       ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38   6 minutes ago       Running             kube-vip                  0                   e151e1b9dd106       kube-vip-ha-922218                  kube-system
	8b6d87aa881c9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      0                   0cb5cedb8ce4c       etcd-ha-922218                      kube-system
	406607e74d161       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 minutes ago       Running             kube-controller-manager   0                   3539ecfac7323       kube-controller-manager-ha-922218   kube-system
	9e882a89de870       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 minutes ago       Running             kube-scheduler            0                   2aad348477526       kube-scheduler-ha-922218            kube-system
	45a868d0ee3cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 minutes ago       Running             kube-apiserver            0                   98c45e98892bd       kube-apiserver-ha-922218            kube-system
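	
	The table is CRI output, so any row can be drilled into by container ID. A sketch, assuming crictl is on the node's PATH and accepts the truncated ID prefix shown above:
	
	  # inspect the restarted storage-provisioner container (ID prefix taken from the table)
	  minikube ssh -p ha-922218 -- sudo crictl inspect eb00966d573d1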
	
	
	==> coredns [48eb79307d4edc3ce53d60f38b8b610f913c0a64dfbb891e06a119e0d346362c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54940 - 19184 "HINFO IN 64422634499881743.2714493057616586485. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.019551872s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [5394a253d0bd78f37e15200679d74a85d3d2641b2cc3dfede103a3f6b42a4ea3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55011 - 16464 "HINFO IN 7317740798094180439.8936119426908172663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018307873s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
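	
	Both CoreDNS replicas show the same pattern: they start with an unsynced Kubernetes API, then time out dialing the kubernetes Service VIP (10.96.0.1:443) while the control plane is restarting, and the listers recover once the apiserver is reachable again. Two quick checks one might run against this profile (a sketch; the kubeconfig context name is assumed to match the profile name):
	
	  # confirm the kubernetes Service has live apiserver endpoints
	  kubectl --context ha-922218 get endpoints kubernetes
	  # confirm both CoreDNS pods went Ready after the timeouts
	  kubectl --context ha-922218 -n kube-system get pods -l k8s-app=kube-dns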
	
	
	==> describe nodes <==
	Name:               ha-922218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_48_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:48:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-922218
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                e1d19c5d-2aa9-4c37-b403-bdef012a1c79
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-58ttm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 coredns-66bc5c9577-2msz7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m53s
	  kube-system                 coredns-66bc5c9577-kd4l6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m53s
	  kube-system                 etcd-ha-922218                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m59s
	  kube-system                 kindnet-f6wtm                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m54s
	  kube-system                 kube-apiserver-ha-922218             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-ha-922218    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-proxy-vqk4x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-ha-922218             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-vip-ha-922218                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m53s                  kube-proxy       
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m59s                  kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m59s                  kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m59s                  kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           9m55s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  NodeReady                9m43s                  kubelet          Node ha-922218 status is now: NodeReady
	  Normal  RegisteredNode           9m29s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           8m48s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x8 over 6m11s)  kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	
	
	Name:               ha-922218-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T20_48_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:53:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-922218-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                0aac81f5-0fe4-4a48-b7c8-cca1476ce619
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rsl29                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 etcd-ha-922218-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m26s
	  kube-system                 kindnet-xhlv4                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m27s
	  kube-system                 kube-apiserver-ha-922218-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-controller-manager-ha-922218-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-proxy-hjm8j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-scheduler-ha-922218-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-vip-ha-922218-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m24s                  kube-proxy       
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           9m24s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           8m48s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m13s (x8 over 7m13s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m13s (x8 over 7m13s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m13s (x8 over 7m13s)  kubelet          Node ha-922218-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m7s                   node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m9s (x8 over 6m9s)    kubelet          Node ha-922218-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m9s (x8 over 6m9s)    kubelet          Node ha-922218-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m9s (x8 over 6m9s)    kubelet          Node ha-922218-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Warning  ContainerGCFailed        5m9s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
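	
	The lone ContainerGCFailed warning on ha-922218-m02 is the kubelet failing to dial /var/run/crio/crio.sock while the runtime was down during the restart; it clears once crio comes back. A sketch for verifying the runtime on that node, assuming minikube's --node flag accepts the node name shown by `minikube node list`:
	
	  minikube ssh -p ha-922218 -n ha-922218-m02 -- sudo systemctl status crio --no-pager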
	
	
	Name:               ha-922218-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T20_49_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:49:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:56:37 +0000   Thu, 20 Nov 2025 20:54:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:56:37 +0000   Thu, 20 Nov 2025 20:54:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:56:37 +0000   Thu, 20 Nov 2025 20:54:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:56:37 +0000   Thu, 20 Nov 2025 20:54:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-922218-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                6f616264-3baf-4d2c-923d-84129734b811
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-94vcx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 etcd-ha-922218-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m46s
	  kube-system                 kindnet-8ql4z                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m46s
	  kube-system                 kube-apiserver-ha-922218-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-ha-922218-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-4cpch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-scheduler-ha-922218-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-vip-ha-922218-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m44s                  kube-proxy       
	  Normal  RegisteredNode           8m44s                  node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  RegisteredNode           8m44s                  node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  RegisteredNode           8m43s                  node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	  Normal  NodeNotReady             5m12s                  node-controller  Node ha-922218-m03 status is now: NodeNotReady
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-922218-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-922218-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x8 over 4m22s)  kubelet          Node ha-922218-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-922218-m03 event: Registered Node ha-922218-m03 in Controller
	
	
	Name:               ha-922218-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T20_50_17_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:57:04 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:57:04 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:57:04 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:57:04 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-922218-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                dba8f593-ce38-490f-a745-3998b81f9342
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-q78zp       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m6s
	  kube-system                 kube-proxy-hz8k6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  Starting                 8m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m6s (x3 over 8m6s)    kubelet          Node ha-922218-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m6s (x3 over 8m6s)    kubelet          Node ha-922218-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m6s (x3 over 8m6s)    kubelet          Node ha-922218-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m4s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           8m4s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           8m3s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  NodeReady                7m53s                  kubelet          Node ha-922218-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  NodeNotReady             5m12s                  node-controller  Node ha-922218-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m15s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m15s)  kubelet          Node ha-922218-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x8 over 4m15s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [8b6d87aa881c9d7ce48cf020cc5a82bcd71165681bd09bdbef589896ef08b244] <==
	{"level":"warn","ts":"2025-11-20T20:53:59.931148Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:53:59.996144Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.031290Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.111651Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.131021Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.230813Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.331158Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.362556Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.383280Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.389766Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.392866Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.412603Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.423856Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:00.430439Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-20T20:54:01.253318Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"5db77087b5cfd589","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T20:54:01.253392Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5db77087b5cfd589","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-20T20:54:01.856395Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5db77087b5cfd589","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-20T20:54:01.856444Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.856483Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.858361Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5db77087b5cfd589","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-20T20:54:01.858402Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.865181Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.865266Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:54:02.958459Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5db77087b5cfd589","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T20:54:02.958492Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5db77087b5cfd589","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 20:58:23 up  3:40,  0 user,  load average: 0.14, 0.69, 0.86
	Linux ha-922218 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [084c8a1dec0788f04fd73911d864fb9472875659cb154976c01f44fab76b9c71] <==
	I1120 20:57:49.171740       1 main.go:301] handling current node
	I1120 20:57:59.170643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:57:59.170674       1 main.go:301] handling current node
	I1120 20:57:59.170688       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:57:59.170693       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:57:59.170868       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:57:59.170876       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	I1120 20:57:59.170976       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:57:59.170988       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:09.175188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:58:09.175257       1 main.go:301] handling current node
	I1120 20:58:09.175275       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:58:09.175282       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:58:09.175490       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:58:09.175500       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	I1120 20:58:09.175616       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:58:09.175626       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:19.172321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:58:19.172356       1 main.go:301] handling current node
	I1120 20:58:19.172371       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:58:19.172377       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:58:19.172541       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:58:19.172550       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	I1120 20:58:19.172630       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:58:19.172637       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
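	
	kindnet is reconciling all four nodes and their pod CIDRs on a ten-second cycle. The routes it programs can be spot-checked on the node (sketch):
	
	  # expect one route per remote node's podCIDR (10.244.1-3.0/24 here)
	  minikube ssh -p ha-922218 -- ip route show | grep 10.244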
	
	
	==> kube-apiserver [45a868d0ee3cc88db4f8ceed46d0f4eddce85b589457dcbb93848dd871b099bf] <==
	I1120 20:52:18.209169       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 20:52:18.209633       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 20:52:18.209681       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:52:18.215582       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:52:18.215701       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:52:18.215715       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:52:18.215722       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:52:18.215728       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:52:18.215793       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 20:52:18.216200       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 20:52:18.216254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 20:52:18.227275       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	E1120 20:52:18.227303       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:52:18.227312       1 policy_source.go:240] refreshing policies
	I1120 20:52:18.274334       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:18.281102       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:18.464733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:52:19.113191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1120 20:52:19.540161       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1120 20:52:19.541903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:52:19.548185       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:21.905119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:21.961328       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:52:50.429683       1 controller.go:667] quota admission added evaluator for: deployments.apps
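	
	The apiserver log reads as a clean restart: caches sync, the benign "Error removing old endpoints" race fires once, the master endpoints are reset to the two reachable control-plane IPs, and the quota evaluators re-register. Aggregate health can be probed directly (sketch):
	
	  kubectl --context ha-922218 get --raw='/readyz?verbose'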
	
	
	==> kube-controller-manager [406607e74d1618ca02cbf22003052ea65983c0e1235732ec547478bff625b9ff] <==
	I1120 20:52:21.554135       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 20:52:21.556473       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:21.557545       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:52:21.559723       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 20:52:21.559753       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 20:52:21.559790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 20:52:21.559844       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 20:52:21.559853       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 20:52:21.559860       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 20:52:21.561946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 20:52:21.563127       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 20:52:21.565300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:52:21.565324       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:52:21.569631       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:52:21.571824       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:52:21.575158       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:52:21.577485       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:52:21.578753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:21.579777       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 20:52:21.706277       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-c9k7m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-c9k7m\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 20:52:21.706377       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9160caa3-ca5a-482c-9900-6b473b1ad059", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-c9k7m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-c9k7m": the object has been modified; please apply your changes to the latest version and try again
	I1120 20:52:28.494750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-922218-m04"
	I1120 20:53:11.511054       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1120 20:54:01.531547       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 20:54:21.706569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-922218-m04"
	
	
	==> kube-proxy [2651827b33b3dc42e63a885b5752ffd7b702afd0a4d5394a2196bddf43144ed2] <==
	I1120 20:52:18.767903       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:52:18.833437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:52:21.904740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-922218&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1120 20:52:23.333884       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:52:23.333927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 20:52:23.334020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:52:23.353055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:23.353120       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:52:23.358578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:52:23.358901       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:52:23.358929       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:23.360415       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:52:23.360448       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:52:23.360457       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:52:23.360482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:52:23.360493       1 config.go:200] "Starting service config controller"
	I1120 20:52:23.360512       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:52:23.360533       1 config.go:309] "Starting node config controller"
	I1120 20:52:23.360545       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:52:23.360553       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:52:23.460696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:52:23.460696       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:52:23.460949       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9e882a89de870c006dd62af4f419f69f18af696b07ee1686b859a279092e03e0] <==
	I1120 20:52:13.226234       1 serving.go:386] Generated self-signed cert in-memory
	W1120 20:52:18.146292       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 20:52:18.146332       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 20:52:18.146353       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 20:52:18.146363       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 20:52:18.193971       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:52:18.194007       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:18.196860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:52:18.196907       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:52:18.197257       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:52:18.197345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:52:18.298064       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266715     758 kubelet_node_status.go:124] "Node was previously registered" node="ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266820     758 kubelet_node_status.go:78] "Successfully registered node" node="ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266859     758 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.267754     758 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:18 ha-922218 kubelet[758]: E1120 20:52:18.275063     758 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-922218\" already exists" pod="kube-system/kube-controller-manager-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.317394     758 apiserver.go:52] "Watching apiserver"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.321290     758 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-922218" podUID="c76017a7-d5b6-4722-a147-ee435bda3cdb"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.336233     758 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.336274     758 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.345879     758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29385aad8e38316abef8aa5e6851d452" path="/var/lib/kubelet/pods/29385aad8e38316abef8aa5e6851d452/volumes"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.371273     758 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-922218" podUID="c76017a7-d5b6-4722-a147-ee435bda3cdb"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.402198     758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-922218" podStartSLOduration=0.402177345 podStartE2EDuration="402.177345ms" podCreationTimestamp="2025-11-20 20:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:18.4018385 +0000 UTC m=+6.147881170" watchObservedRunningTime="2025-11-20 20:52:18.402177345 +0000 UTC m=+6.148220017"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.420232     758 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460065     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-cni-cfg\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460113     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fe1dcc4-b41f-4da1-894e-e7fe935e3e63-lib-modules\") pod \"kube-proxy-vqk4x\" (UID: \"1fe1dcc4-b41f-4da1-894e-e7fe935e3e63\") " pod="kube-system/kube-proxy-vqk4x"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460142     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-xtables-lock\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460163     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6-tmp\") pod \"storage-provisioner\" (UID: \"ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460415     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fe1dcc4-b41f-4da1-894e-e7fe935e3e63-xtables-lock\") pod \"kube-proxy-vqk4x\" (UID: \"1fe1dcc4-b41f-4da1-894e-e7fe935e3e63\") " pod="kube-system/kube-proxy-vqk4x"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460598     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-lib-modules\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:19 ha-922218 kubelet[758]: I1120 20:52:19.390398     758 scope.go:117] "RemoveContainer" containerID="dc3394ace8f04ea97097c48698b5fbe1c460c7357aea54da0f99a76c8c5578c6"
	Nov 20 20:52:20 ha-922218 kubelet[758]: I1120 20:52:20.397922     758 scope.go:117] "RemoveContainer" containerID="dc3394ace8f04ea97097c48698b5fbe1c460c7357aea54da0f99a76c8c5578c6"
	Nov 20 20:52:20 ha-922218 kubelet[758]: I1120 20:52:20.398303     758 scope.go:117] "RemoveContainer" containerID="fac2b6f885b5faaecbbea594acb9ce5d1f4225dec03b4f4c52d89ed9284f7411"
	Nov 20 20:52:20 ha-922218 kubelet[758]: E1120 20:52:20.398477     758 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6)\"" pod="kube-system/storage-provisioner" podUID="ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6"
	Nov 20 20:52:25 ha-922218 kubelet[758]: I1120 20:52:25.559555     758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 20:52:35 ha-922218 kubelet[758]: I1120 20:52:35.344381     758 scope.go:117] "RemoveContainer" containerID="fac2b6f885b5faaecbbea594acb9ce5d1f4225dec03b4f4c52d89ed9284f7411"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-922218 -n ha-922218
helpers_test.go:269: (dbg) Run:  kubectl --context ha-922218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (424.57s)
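Note: the repeated "Operation cannot be fulfilled ... the object has been modified" errors in the kube-controller-manager log above are ordinary optimistic-concurrency conflicts on the EndpointSlice's resourceVersion; the controller logs them as "retrying" and resolves them itself. For reference, a minimal client-go sketch of the same conflict-retry pattern, assuming an already-configured clientset (the namespace and object name below are copied from this log; the label mutation is purely illustrative):

// Sketch of the conflict-retry pattern behind the "object has been
// modified" errors above. Assumes a configured *kubernetes.Clientset;
// the label mutation is illustrative, not taken from this run.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func relabelEndpointSlice(cs *kubernetes.Clientset) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest object on every attempt so the update
		// carries a fresh resourceVersion.
		es, err := cs.DiscoveryV1().EndpointSlices("kube-system").Get(
			context.TODO(), "kube-dns-c9k7m", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if es.Labels == nil {
			es.Labels = map[string]string{}
		}
		es.Labels["example/touched"] = "true" // illustrative mutation
		_, err = cs.DiscoveryV1().EndpointSlices("kube-system").Update(
			context.TODO(), es, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another attempt
	})
}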

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-922218" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922218\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-922218\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-922218\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-922218
helpers_test.go:243: (dbg) docker inspect ha-922218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2",
	        "Created": "2025-11-20T20:48:08.305484419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323354,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T20:52:05.575784584Z",
	            "FinishedAt": "2025-11-20T20:52:04.865809974Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/hosts",
	        "LogPath": "/var/lib/docker/containers/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2/f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2-json.log",
	        "Name": "/ha-922218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-922218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-922218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f4fe7dc2831e0abe6d3df137fd6d01eced7c40f631b05be8eb32f86a00cb16b2",
	                "LowerDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8e305fca402b926880c9870fe726e187665ae3fc1d8dfdd526371b35734845e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-922218",
	                "Source": "/var/lib/docker/volumes/ha-922218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-922218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-922218",
	                "name.minikube.sigs.k8s.io": "ha-922218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "66dd206f11a0a0a6ef63e1aaf681e70420082aaa4fdf320b0caa28316d460919",
	            "SandboxKey": "/var/run/docker/netns/66dd206f11a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-922218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "acedad58d8d6133060c432e76b858ca8895634a834fb6c75b12b58c6c2b70de4",
	                    "EndpointID": "5cd5b0e30b914b09cb85aa3289ff87d176f3621330c5c3cc1edd6559a4bda334",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:7d:28:02:80:9c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-922218",
	                        "f4fe7dc2831e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
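The NetworkSettings.Ports block in the inspect output above is where the host-side SSH port (32808 here) comes from; later in this log minikube reads it back with a docker inspect format template. A minimal Go wrapper around that same template, assuming the docker CLI is on PATH:

// Reads the host port mapped to the container's 22/tcp, using the same
// inspect template minikube runs later in this log.
package example

import (
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

For example, hostSSHPort("ha-922218") would return "32808" against the container state shown above.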
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-922218 -n ha-922218
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 logs -n 25: (1.124850071s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m02 sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m02.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218-m04:/home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp testdata/cp-test.txt ha-922218-m04:/home/docker/cp-test.txt                                                             │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218-m04.txt │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218:/home/docker/cp-test_ha-922218-m04_ha-922218.txt                       │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218.txt                                                 │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m02 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ cp      │ ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m03:/home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt               │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ ssh     │ ha-922218 ssh -n ha-922218-m03 sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt                                         │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:50 UTC │
	│ node    │ ha-922218 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:50 UTC │ 20 Nov 25 20:51 UTC │
	│ node    │ ha-922218 node start m02 --alsologtostderr -v 5                                                                                      │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:51 UTC │
	│ node    │ ha-922218 node list --alsologtostderr -v 5                                                                                           │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │                     │
	│ stop    │ ha-922218 stop --alsologtostderr -v 5                                                                                                │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:51 UTC │ 20 Nov 25 20:52 UTC │
	│ start   │ ha-922218 start --wait true --alsologtostderr -v 5                                                                                   │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:52 UTC │                     │
	│ node    │ ha-922218 node list --alsologtostderr -v 5                                                                                           │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:58 UTC │                     │
	│ node    │ ha-922218 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-922218 │ jenkins │ v1.37.0 │ 20 Nov 25 20:58 UTC │ 20 Nov 25 20:58 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:52:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:52:05.328764  323157 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:52:05.329077  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329088  323157 out.go:374] Setting ErrFile to fd 2...
	I1120 20:52:05.329095  323157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:52:05.329358  323157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:52:05.329815  323157 out.go:368] Setting JSON to false
	I1120 20:52:05.330759  323157 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12867,"bootTime":1763659058,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:52:05.330873  323157 start.go:143] virtualization: kvm guest
	I1120 20:52:05.332897  323157 out.go:179] * [ha-922218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:52:05.334089  323157 notify.go:221] Checking for updates...
	I1120 20:52:05.334111  323157 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:52:05.335153  323157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:52:05.336342  323157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:05.337453  323157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:52:05.338644  323157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:52:05.339840  323157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:52:05.341429  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:05.341547  323157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:52:05.366166  323157 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:52:05.366337  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.429868  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.418170855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.429981  323157 docker.go:319] overlay module found
	I1120 20:52:05.432415  323157 out.go:179] * Using the docker driver based on existing profile
	I1120 20:52:05.433478  323157 start.go:309] selected driver: docker
	I1120 20:52:05.433497  323157 start.go:930] validating driver "docker" against &{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.433601  323157 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:52:05.433679  323157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:52:05.497705  323157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-20 20:52:05.48528978 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:52:05.498702  323157 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:52:05.498750  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:05.498813  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:05.498895  323157 start.go:353] cluster config:
	{Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:05.501099  323157 out.go:179] * Starting "ha-922218" primary control-plane node in "ha-922218" cluster
	I1120 20:52:05.502199  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:05.503398  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:05.504658  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:05.504699  323157 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:52:05.504719  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:05.504760  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:05.504824  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:05.504840  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:05.505023  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.527904  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:05.527929  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:05.527945  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:05.527985  323157 start.go:360] acquireMachinesLock for ha-922218: {Name:mk7973b5b3e2bce97a45ae60ce14811fb93a6808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:05.528045  323157 start.go:364] duration metric: took 37.272µs to acquireMachinesLock for "ha-922218"
	I1120 20:52:05.528067  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:05.528078  323157 fix.go:54] fixHost starting: 
	I1120 20:52:05.528385  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.546149  323157 fix.go:112] recreateIfNeeded on ha-922218: state=Stopped err=<nil>
	W1120 20:52:05.546186  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:05.548148  323157 out.go:252] * Restarting existing docker container for "ha-922218" ...
	I1120 20:52:05.548228  323157 cli_runner.go:164] Run: docker start ha-922218
	I1120 20:52:05.829297  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:52:05.854267  323157 kic.go:430] container "ha-922218" state is running.
	I1120 20:52:05.854754  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:05.879797  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:05.880184  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:05.880316  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:05.902671  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:05.902972  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:05.902987  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:05.903785  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36002->127.0.0.1:32808: read: connection reset by peer
	I1120 20:52:09.038413  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
	I1120 20:52:09.038466  323157 ubuntu.go:182] provisioning hostname "ha-922218"
	I1120 20:52:09.038538  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.056776  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.057040  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.057057  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218 && echo "ha-922218" | sudo tee /etc/hostname
	I1120 20:52:09.198987  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218
	
	I1120 20:52:09.199094  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:09.218187  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:09.218484  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:09.218518  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:09.350283  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:09.350320  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:09.350371  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:09.350386  323157 provision.go:84] configureAuth start
	I1120 20:52:09.350452  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:09.368706  323157 provision.go:143] copyHostCerts
	I1120 20:52:09.368743  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368777  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:09.368790  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:09.368861  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:09.368944  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368963  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:09.368970  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:09.368996  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:09.369044  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369060  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:09.369066  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:09.369089  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:09.369139  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218 san=[127.0.0.1 192.168.49.2 ha-922218 localhost minikube]
	I1120 20:52:10.061446  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:10.061522  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:10.061563  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.080281  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.175628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:10.175687  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1120 20:52:10.193744  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:10.193807  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:52:10.211340  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:10.211404  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:10.229048  323157 provision.go:87] duration metric: took 878.645023ms to configureAuth
	I1120 20:52:10.229077  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:10.229298  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:10.229423  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.247922  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:10.248191  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1120 20:52:10.248210  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:10.573365  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:52:10.573392  323157 machine.go:97] duration metric: took 4.693182802s to provisionDockerMachine
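
The sysconfig write and crio restart above only prove the unit came back up; whether --insecure-registry actually reached the crio process depends on the unit sourcing /etc/sysconfig/crio.minikube. A node-side sketch (the EnvironmentFile wiring is an assumption about the packaged unit):

    # Confirm the drop-in exists and crio restarted cleanly.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio
    sudo systemctl show crio -p ExecStart   # should reference $CRIO_MINIKUBE_OPTIONS if the unit consumes the drop-in
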
	I1120 20:52:10.573407  323157 start.go:293] postStartSetup for "ha-922218" (driver="docker")
	I1120 20:52:10.573426  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:10.573499  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:10.573553  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.593733  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.690092  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:10.693995  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:10.694023  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:10.694034  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:10.694094  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:10.694185  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:10.694199  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:10.694322  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:10.702399  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:10.721119  323157 start.go:296] duration metric: took 147.693408ms for postStartSetup
	I1120 20:52:10.721235  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:10.721282  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.739969  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.833630  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
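
The two df probes above extract the use% (field 5 of df -h) and available gigabytes (field 4 of df -BG) for /var. The same check with a warning threshold, as a sketch (the 90% cutoff is illustrative, not minikube's):

    # Warn when /var crosses a hypothetical 90% usage threshold.
    pct=$(df -h /var | awk 'NR==2 {gsub("%","",$5); print $5}')
    [ "$pct" -ge 90 ] && echo "WARN: /var is ${pct}% full"
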
	I1120 20:52:10.838327  323157 fix.go:56] duration metric: took 5.310241763s for fixHost
	I1120 20:52:10.838357  323157 start.go:83] releasing machines lock for "ha-922218", held for 5.310298505s
	I1120 20:52:10.838432  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:52:10.856719  323157 ssh_runner.go:195] Run: cat /version.json
	I1120 20:52:10.856760  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:10.856779  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.856845  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:52:10.876456  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:10.876715  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:52:11.025514  323157 ssh_runner.go:195] Run: systemctl --version
	I1120 20:52:11.032462  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:11.068010  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:11.072912  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:11.072991  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:11.081063  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:11.081087  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:11.081118  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:11.081168  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:11.095970  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:11.108445  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:11.108509  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:11.123137  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:11.135601  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:11.213922  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:11.297509  323157 docker.go:234] disabling docker service ...
	I1120 20:52:11.297579  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:11.312344  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:11.324558  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:11.404570  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:11.482324  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:11.495121  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:11.509896  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:11.509955  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.519009  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:11.519074  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.528081  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.536889  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.546294  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:11.554800  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.563861  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:11.572378  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
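
The sed edits above pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A sketch to spot-check the resulting drop-in (expected values inferred from the edits):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, if the edits applied cleanly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
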
	I1120 20:52:11.581389  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:11.589599  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:52:11.597300  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:11.674297  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:52:11.817850  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:52:11.817928  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:52:11.822052  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:52:11.822102  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:52:11.826068  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:52:11.851404  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
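
The crictl version call above already resolved the CRI-O socket through the /etc/crictl.yaml written a few lines earlier, so no explicit endpoint flag is needed. A sketch of a fuller connectivity check:

    # crictl defaults to the runtime endpoint from /etc/crictl.yaml.
    cat /etc/crictl.yaml
    sudo crictl info | head   # fails fast if the runtime endpoint is wrong
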
	I1120 20:52:11.851494  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.879770  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:52:11.909889  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:52:11.911081  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:52:11.928829  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:52:11.933285  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
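
The one-liner above is a replace-or-append for a single /etc/hosts entry: strip any line already tagged host.minikube.internal, append the fresh mapping, and copy the result back over /etc/hosts. Generalized sketch (IP and NAME are placeholders):

    IP=192.168.49.1; NAME=host.minikube.internal
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
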
	I1120 20:52:11.944894  323157 kubeadm.go:884] updating cluster {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 20:52:11.945069  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:11.945159  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:11.979530  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:11.979551  323157 crio.go:433] Images already preloaded, skipping extraction
	I1120 20:52:11.979599  323157 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 20:52:12.008103  323157 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 20:52:12.008127  323157 cache_images.go:86] Images are preloaded, skipping loading
	I1120 20:52:12.008135  323157 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1120 20:52:12.008259  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
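
The rendered unit above clears the packaged command (the empty ExecStart=) and pins the hostname override and node IP. Once the files are written, the effective unit can be inspected on the node; a sketch:

    # Show the kubelet unit plus the 10-kubeadm.conf drop-in scp'd below.
    systemctl cat kubelet | head -n 25
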
	I1120 20:52:12.008342  323157 ssh_runner.go:195] Run: crio config
	I1120 20:52:12.053953  323157 cni.go:84] Creating CNI manager for ""
	I1120 20:52:12.053974  323157 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1120 20:52:12.053990  323157 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 20:52:12.054013  323157 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-922218 NodeName:ha-922218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 20:52:12.054128  323157 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-922218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
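
Before feeding the generated config to kubeadm, it can be validated offline. A sketch using the version-pinned binary from this run and the path the file is written to a few lines below (availability of the validate subcommand in this binary is an assumption; it exists in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
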
	
	I1120 20:52:12.054146  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:52:12.054186  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:52:12.067315  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
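
The empty lsmod output above means no ip_vs modules are loaded, so kube-vip is configured without IPVS control-plane load-balancing. A sketch of the same probe with an explicit fallback message:

    lsmod | grep ip_vs \
      || sudo modprobe ip_vs 2>/dev/null \
      || echo "ip_vs unavailable; kube-vip will skip IPVS load-balancing"
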
	I1120 20:52:12.067457  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
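
Once this manifest lands in /etc/kubernetes/manifests (the scp happens a few lines below) and kubelet is running, the static pod should surface in the runtime even without an API server. A sketch for checking from the node:

    # Static pods are visible to CRI-O directly; --name matches by pattern.
    sudo crictl pods --name kube-vip
    sudo crictl ps --name kube-vip
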
	I1120 20:52:12.067537  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:52:12.075923  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:52:12.076002  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1120 20:52:12.083739  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1120 20:52:12.096285  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:52:12.109031  323157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1120 20:52:12.121723  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:52:12.134083  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:52:12.137866  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:52:12.148115  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:12.228004  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:52:12.251717  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.2
	I1120 20:52:12.251748  323157 certs.go:195] generating shared ca certs ...
	I1120 20:52:12.251770  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.251938  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:52:12.251981  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:52:12.251992  323157 certs.go:257] generating profile certs ...
	I1120 20:52:12.252071  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:52:12.252098  323157 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09
	I1120 20:52:12.252119  323157 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1120 20:52:12.330376  323157 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 ...
	I1120 20:52:12.330417  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09: {Name:mk6b74f2e5931344472166b62a32edaf4f45744b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330619  323157 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 ...
	I1120 20:52:12.330655  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09: {Name:mk229093d7281b814de77a27daa6f3543e470a4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:12.330779  323157 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt
	I1120 20:52:12.330974  323157 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c82d6d09 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key
	I1120 20:52:12.331167  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:52:12.331190  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:52:12.331230  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:52:12.331254  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:52:12.331277  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:52:12.331295  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:52:12.331313  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:52:12.331331  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:52:12.331349  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:52:12.331428  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:52:12.331475  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:52:12.331490  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:52:12.331519  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:52:12.331552  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:52:12.331587  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:52:12.331662  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:12.331712  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.331735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.331750  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.332594  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:52:12.353047  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:52:12.370168  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:52:12.387211  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:52:12.405559  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:52:12.422666  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:52:12.441539  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:52:12.460737  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:52:12.479326  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:52:12.497570  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:52:12.515902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:52:12.534796  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 20:52:12.548189  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:52:12.554678  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.562462  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:52:12.570059  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.573962  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.574018  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:52:12.607754  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:52:12.615941  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.623665  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:52:12.632109  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636168  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.636242  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:52:12.670187  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:52:12.678284  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.685973  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:52:12.693528  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697235  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.697293  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:52:12.731035  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
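
The symlink names checked above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is why each certificate is hashed with openssl x509 -hash before the test -L. Reproducing the mapping for one cert, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # expected: a symlink back to minikubeCA.pem (b5213941.0 in this run)
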
	I1120 20:52:12.738959  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:52:12.742968  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:52:12.789124  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:52:12.832435  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:52:12.886449  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:52:12.943193  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:52:12.978550  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
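
-checkend 86400 makes openssl exit non-zero if the certificate expires within 24 hours, which is how the restart path decides whether regeneration is needed. The same sweep as a loop, sketched over the certs checked above:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: valid for >24h"
    done
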
	I1120 20:52:13.013640  323157 kubeadm.go:401] StartCluster: {Name:ha-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:52:13.013797  323157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 20:52:13.013859  323157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 20:52:13.049696  323157 cri.go:89] found id: "65d1e3fad6b2daa0d2eb48dff43ccc96c150434dda9afd9eeaf84004fee7ace3"
	I1120 20:52:13.049721  323157 cri.go:89] found id: "8b6d87aa881c9d7ce48cf020cc5a82bcd71165681bd09bdbef589896ef08b244"
	I1120 20:52:13.049727  323157 cri.go:89] found id: "406607e74d1618ca02cbf22003052ea65983c0e1235732ec547478bff625b9ff"
	I1120 20:52:13.049732  323157 cri.go:89] found id: "9e882a89de870c006dd62af4f419f69f18af696b07ee1686b859a279092e03e0"
	I1120 20:52:13.049737  323157 cri.go:89] found id: "45a868d0ee3cc88db4f8ceed46d0f4eddce85b589457dcbb93848dd871b099bf"
	I1120 20:52:13.049741  323157 cri.go:89] found id: ""
	I1120 20:52:13.049788  323157 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 20:52:13.062401  323157 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:52:13Z" level=error msg="open /run/runc: no such file or directory"
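
The bare runc query fails because /run/runc is absent on this image; the container list already obtained through crictl a few lines above is the reliable path. The equivalent query via the CRI, as a sketch:

    # Same filter the log applies above, via crictl rather than runc.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
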
	I1120 20:52:13.062470  323157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 20:52:13.070809  323157 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 20:52:13.070832  323157 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 20:52:13.070881  323157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 20:52:13.078757  323157 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:52:13.079306  323157 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-922218" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.079441  323157 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "ha-922218" cluster setting kubeconfig missing "ha-922218" context setting]
	I1120 20:52:13.079865  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.080582  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 20:52:13.081160  323157 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 20:52:13.081177  323157 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 20:52:13.081183  323157 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 20:52:13.081188  323157 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 20:52:13.081196  323157 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 20:52:13.081252  323157 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1120 20:52:13.081712  323157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 20:52:13.089447  323157 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1120 20:52:13.089467  323157 kubeadm.go:602] duration metric: took 18.629525ms to restartPrimaryControlPlane
	I1120 20:52:13.089478  323157 kubeadm.go:403] duration metric: took 75.851486ms to StartCluster
	I1120 20:52:13.089496  323157 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.089563  323157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:52:13.090205  323157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:52:13.090465  323157 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:52:13.090490  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:52:13.090499  323157 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 20:52:13.090755  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.093327  323157 out.go:179] * Enabled addons: 
	I1120 20:52:13.094381  323157 addons.go:515] duration metric: took 3.879805ms for enable addons: enabled=[]
	I1120 20:52:13.094412  323157 start.go:247] waiting for cluster config update ...
	I1120 20:52:13.094424  323157 start.go:256] writing updated cluster config ...
	I1120 20:52:13.095823  323157 out.go:203] 
	I1120 20:52:13.097078  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:13.097195  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.098642  323157 out.go:179] * Starting "ha-922218-m02" control-plane node in "ha-922218" cluster
	I1120 20:52:13.099780  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:52:13.101045  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:52:13.102201  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:52:13.102233  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:52:13.102244  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:52:13.102316  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:52:13.102330  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:52:13.102446  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.124350  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:52:13.124372  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:52:13.124388  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:52:13.124422  323157 start.go:360] acquireMachinesLock for ha-922218-m02: {Name:mk327cff0c42e8fe5ded9f6386acc07315d39a09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:52:13.124488  323157 start.go:364] duration metric: took 45.103µs to acquireMachinesLock for "ha-922218-m02"
	I1120 20:52:13.124508  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:52:13.124518  323157 fix.go:54] fixHost starting: m02
	I1120 20:52:13.124771  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.143934  323157 fix.go:112] recreateIfNeeded on ha-922218-m02: state=Stopped err=<nil>
	W1120 20:52:13.143964  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:52:13.149354  323157 out.go:252] * Restarting existing docker container for "ha-922218-m02" ...
	I1120 20:52:13.149455  323157 cli_runner.go:164] Run: docker start ha-922218-m02
	I1120 20:52:13.461778  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:52:13.484306  323157 kic.go:430] container "ha-922218-m02" state is running.
	I1120 20:52:13.484763  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:13.505868  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:52:13.506112  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:52:13.506167  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:13.526643  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:13.526854  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:13.526866  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:52:13.527491  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45258->127.0.0.1:32813: read: connection reset by peer
	I1120 20:52:16.660479  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
	I1120 20:52:16.660511  323157 ubuntu.go:182] provisioning hostname "ha-922218-m02"
	I1120 20:52:16.660584  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.679969  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.680183  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.680195  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m02 && echo "ha-922218-m02" | sudo tee /etc/hostname
	I1120 20:52:16.821890  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m02
	
	I1120 20:52:16.821965  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:16.839799  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:16.840017  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:16.840033  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:52:16.971112  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:52:16.971145  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:52:16.971166  323157 ubuntu.go:190] setting up certificates
	I1120 20:52:16.971179  323157 provision.go:84] configureAuth start
	I1120 20:52:16.971279  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:16.989488  323157 provision.go:143] copyHostCerts
	I1120 20:52:16.989529  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989560  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:52:16.989569  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:52:16.989635  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:52:16.989719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989738  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:52:16.989744  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:52:16.989770  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:52:16.989870  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989892  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:52:16.989898  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:52:16.989924  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:52:16.989977  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m02 san=[127.0.0.1 192.168.49.3 ha-922218-m02 localhost minikube]
	I1120 20:52:18.325243  323157 provision.go:177] copyRemoteCerts
	I1120 20:52:18.325315  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:52:18.325359  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.349476  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:18.454303  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:52:18.454394  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:52:18.479542  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:52:18.479667  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 20:52:18.500104  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:52:18.500180  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:52:18.518171  323157 provision.go:87] duration metric: took 1.546978244s to configureAuth
	I1120 20:52:18.518200  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:52:18.518425  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:52:18.518527  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.537190  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:52:18.537424  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1120 20:52:18.537440  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:52:18.895794  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
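	The drop-in written above marks the cluster's service CIDR (10.96.0.0/12) as an insecure registry source for cri-o, so in-cluster registries can be pulled from without TLS. A sketch for verifying it on the node (assumes the crio unit sources the sysconfig drop-in, as on the kicbase image):
	  cat /etc/sysconfig/crio.minikube
	  systemctl cat crio | grep -i EnvironmentFile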
	I1120 20:52:18.895824  323157 machine.go:97] duration metric: took 5.389701302s to provisionDockerMachine
	I1120 20:52:18.895839  323157 start.go:293] postStartSetup for "ha-922218-m02" (driver="docker")
	I1120 20:52:18.895853  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:52:18.895988  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:52:18.896049  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:18.917397  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.017957  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:52:19.023501  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:52:19.023526  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:52:19.023536  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:52:19.023581  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:52:19.023657  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:52:19.023667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:52:19.023756  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:52:19.033501  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:52:19.054171  323157 start.go:296] duration metric: took 158.315421ms for postStartSetup
	I1120 20:52:19.054290  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:52:19.054332  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.076545  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.179900  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:52:19.186175  323157 fix.go:56] duration metric: took 6.061648548s for fixHost
	I1120 20:52:19.186235  323157 start.go:83] releasing machines lock for "ha-922218-m02", held for 6.061714164s
	I1120 20:52:19.186321  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m02
	I1120 20:52:19.212036  323157 out.go:179] * Found network options:
	I1120 20:52:19.213348  323157 out.go:179]   - NO_PROXY=192.168.49.2
	W1120 20:52:19.214893  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:52:19.214943  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:52:19.215032  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:52:19.215091  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.215108  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:52:19.215187  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m02
	I1120 20:52:19.241015  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.241538  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m02/id_rsa Username:docker}
	I1120 20:52:19.437902  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:52:19.444586  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:52:19.444668  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:52:19.455464  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:52:19.455492  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:52:19.455532  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:52:19.455584  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:52:19.479915  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:52:19.496789  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:52:19.496839  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:52:19.512753  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:52:19.525991  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:52:19.636269  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:52:19.743860  323157 docker.go:234] disabling docker service ...
	I1120 20:52:19.743937  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:52:19.758942  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:52:19.771625  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:52:19.879756  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:52:19.984607  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:52:19.997908  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:52:20.012508  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:52:20.012564  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.021752  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:52:20.021808  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.031377  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.041137  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.050156  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:52:20.058809  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.068260  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.078190  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:52:20.087650  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:52:20.095104  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
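	A quick way to confirm the sed edits above landed, using the paths and keys exactly as in the commands shown:
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  cat /proc/sys/net/ipv4/ip_forward   # expected: 1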
	I1120 20:52:20.102596  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:52:20.245093  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:53:50.500050  323157 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.254907746s)
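	A 90-second restart of cri-o is unusually slow. If it needed diagnosing, the unit's recent journal would be the first stop (systemd assumed, per the cgroup-driver detection above):
	  sudo journalctl -u crio --since "10 minutes ago" --no-pager | tail -n 50
	  systemctl status crio --no-pager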
	I1120 20:53:50.500099  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:53:50.500170  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:53:50.504526  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:53:50.504579  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:53:50.508365  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:53:50.534774  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:53:50.534864  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.562018  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:53:50.592115  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:53:50.593411  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:53:50.594685  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:53:50.612868  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:53:50.617151  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
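	The /etc/hosts rewrite above is an idempotent replace-then-append. Generalized (IP and NAME are placeholders; NAME is matched as a regex, which is close enough for these hostnames):
	  IP=192.168.49.1; NAME=host.minikube.internal
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts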
	I1120 20:53:50.628089  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:53:50.628365  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:50.628586  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:53:50.646653  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:53:50.646897  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.3
	I1120 20:53:50.646915  323157 certs.go:195] generating shared ca certs ...
	I1120 20:53:50.646931  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:53:50.647073  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:53:50.647108  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:53:50.647117  323157 certs.go:257] generating profile certs ...
	I1120 20:53:50.647209  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:53:50.647303  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.c836c87f
	I1120 20:53:50.647340  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:53:50.647354  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:53:50.647371  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:53:50.647384  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:53:50.647397  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:53:50.647409  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:53:50.647421  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:53:50.647433  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:53:50.647458  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:53:50.647511  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:53:50.647546  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:53:50.647555  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:53:50.647579  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:53:50.647605  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:53:50.647625  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:53:50.647667  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:53:50.647693  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:53:50.647706  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:50.647719  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:53:50.647768  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:53:50.665659  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:53:50.755584  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:53:50.760041  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:53:50.768729  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:53:50.772558  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:53:50.781784  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:53:50.785575  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:53:50.794334  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:53:50.798078  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:53:50.807321  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:53:50.811305  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:53:50.819736  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:53:50.823350  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:53:50.831741  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:53:50.849848  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:53:50.867486  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:53:50.884818  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:53:50.902061  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:53:50.919790  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:53:50.937569  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:53:50.955443  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:53:50.972778  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:53:50.990638  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:53:51.008199  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:53:51.026275  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:53:51.039905  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:53:51.054001  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:53:51.068159  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:53:51.083445  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:53:51.096696  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:53:51.109424  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 20:53:51.122677  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:53:51.129308  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.137038  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:53:51.144950  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148713  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.148764  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:53:51.183638  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:53:51.192271  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.199701  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:53:51.207336  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211049  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.211109  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:53:51.247556  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:53:51.255756  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.263373  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:53:51.270762  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274831  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.274886  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:53:51.310488  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
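	The 51391683.0 / 3ec20f2e.0 / b5213941.0 symlink names checked above are the OpenSSL subject hashes of the corresponding certificates, which is exactly what the preceding "openssl x509 -hash" runs compute:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941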
	I1120 20:53:51.318664  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:53:51.322469  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:53:51.356447  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:53:51.390490  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:53:51.424733  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:53:51.459076  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:53:51.492960  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
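	The six openssl runs above are a 24-hour expiry sweep: -checkend 86400 exits non-zero if the certificate expires within that many seconds. The same sweep as a loop, over the paths from the log:
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	    openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	      && echo "$c: valid for >24h" || echo "$c: expires within 24h"
	  done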
	I1120 20:53:51.527319  323157 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1120 20:53:51.527454  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:53:51.527485  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:53:51.527542  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:53:51.541450  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
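	Since lsmod finds no ip_vs modules, kube-vip keeps only the ARP-based VIP handling (vip_arp is enabled in the config below). If ipvs-backed control-plane load balancing were wanted, the standard module set would need loading first (hypothetical; may not be possible inside a kic container without host cooperation):
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	  lsmod | grep ip_vs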
	I1120 20:53:51.541513  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
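	This manifest is copied into /etc/kubernetes/manifests below, so kubelet runs it as a static pod. Once the node is back, a sketch for confirming the VIP is served (the pod name follows the static-pod convention visible in the pod list later in this run):
	  kubectl -n kube-system get pod kube-vip-ha-922218-m02
	  curl -sk https://192.168.49.254:8443/healthz   # the VIP and port from the config above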
	I1120 20:53:51.541572  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:53:51.549762  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:53:51.549835  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:53:51.558197  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:53:51.572021  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:53:51.585070  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:53:51.597674  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:53:51.601380  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:53:51.611235  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.721067  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.734155  323157 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:53:51.734528  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:53:51.736279  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:53:51.737724  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:53:51.846124  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:53:51.859674  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:53:51.859761  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:53:51.860000  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m02" to be "Ready" ...
	W1120 20:53:53.863125  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:55.863446  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	W1120 20:53:57.863942  323157 node_ready.go:57] node "ha-922218-m02" has "Ready":"False" status (will retry)
	I1120 20:54:00.364328  323157 node_ready.go:49] node "ha-922218-m02" is "Ready"
	I1120 20:54:00.364359  323157 node_ready.go:38] duration metric: took 8.504330619s for node "ha-922218-m02" to be "Ready" ...
	I1120 20:54:00.364381  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:00.364433  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:00.376821  323157 api_server.go:72] duration metric: took 8.642616301s to wait for apiserver process to appear ...
	I1120 20:54:00.376853  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:00.376887  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:00.381080  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
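	The readiness gate here is two-fold: the apiserver process must exist, then its healthz endpoint must answer. Reproduced manually with the exact checks from the log:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  curl -sk https://192.168.49.2:8443/healthz   # returns: ok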
	I1120 20:54:00.382023  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:00.382047  323157 api_server.go:131] duration metric: took 5.187881ms to wait for apiserver health ...
	I1120 20:54:00.382059  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:00.388374  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:00.388402  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.388407  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.388410  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.388414  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.388417  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.388422  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.388425  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.388428  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.388435  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.388440  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.388445  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.388448  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.388453  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.388461  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.388465  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.388468  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.388473  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.388479  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.388482  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.388485  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.388491  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.388494  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.388496  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.388499  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.388502  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.388505  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.388510  323157 system_pods.go:74] duration metric: took 6.446272ms to wait for pod list to return data ...
	I1120 20:54:00.388517  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:00.391628  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:00.391650  323157 default_sa.go:55] duration metric: took 3.127505ms for default service account to be created ...
	I1120 20:54:00.391659  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:00.397448  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:00.397474  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:00.397480  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:00.397484  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:00.397487  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:00.397491  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running
	I1120 20:54:00.397495  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:00.397498  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:00.397501  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:00.397507  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:00.397515  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:00.397519  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:00.397523  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running
	I1120 20:54:00.397528  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:00.397534  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:00.397537  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running
	I1120 20:54:00.397542  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:00.397546  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:00.397550  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:00.397553  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:00.397556  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:00.397559  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:00.397564  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running
	I1120 20:54:00.397567  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:00.397569  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:00.397574  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:00.397577  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:00.397584  323157 system_pods.go:126] duration metric: took 5.920412ms to wait for k8s-apps to be running ...
	I1120 20:54:00.397590  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:00.397634  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:00.411201  323157 system_svc.go:56] duration metric: took 13.597746ms WaitForService to wait for kubelet
	I1120 20:54:00.411248  323157 kubeadm.go:587] duration metric: took 8.677048036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:00.411276  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:00.415079  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415110  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415124  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415127  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415131  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415134  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415137  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:00.415140  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:00.415143  323157 node_conditions.go:105] duration metric: took 3.862735ms to run NodePressure ...
	I1120 20:54:00.415156  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:00.415179  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:00.416940  323157 out.go:203] 
	I1120 20:54:00.418262  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:00.418361  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.420050  323157 out.go:179] * Starting "ha-922218-m03" control-plane node in "ha-922218" cluster
	I1120 20:54:00.421459  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:00.422633  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:00.423753  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:00.423776  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:00.423854  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:00.423922  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:00.423940  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:00.424083  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.445274  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:00.445296  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:00.445313  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:00.445346  323157 start.go:360] acquireMachinesLock for ha-922218-m03: {Name:mk2f097c0ed961dc411b64ff8718e82c63bed499 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:00.445404  323157 start.go:364] duration metric: took 37.644µs to acquireMachinesLock for "ha-922218-m03"
	I1120 20:54:00.445429  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:00.445440  323157 fix.go:54] fixHost starting: m03
	I1120 20:54:00.445721  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.464059  323157 fix.go:112] recreateIfNeeded on ha-922218-m03: state=Stopped err=<nil>
	W1120 20:54:00.464096  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:00.465782  323157 out.go:252] * Restarting existing docker container for "ha-922218-m03" ...
	I1120 20:54:00.465877  323157 cli_runner.go:164] Run: docker start ha-922218-m03
	I1120 20:54:00.752312  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:54:00.772989  323157 kic.go:430] container "ha-922218-m03" state is running.
	I1120 20:54:00.773519  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:00.792599  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:00.792864  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:00.792955  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:00.811862  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:00.812107  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:00.812122  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:00.812859  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51650->127.0.0.1:32818: read: connection reset by peer
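	The handshake failure is expected this early: the container was started moments before and sshd is not yet up; the dial is retried and succeeds three seconds later. A comparable manual wait, with the port and key from this log:
	  until ssh -i /home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa \
	        -p 32818 -o StrictHostKeyChecking=no -o ConnectTimeout=2 docker@127.0.0.1 true 2>/dev/null; do
	    sleep 1
	  done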
	I1120 20:54:03.944569  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:03.944604  323157 ubuntu.go:182] provisioning hostname "ha-922218-m03"
	I1120 20:54:03.944668  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:03.962694  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:03.962979  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:03.963001  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m03 && echo "ha-922218-m03" | sudo tee /etc/hostname
	I1120 20:54:04.105497  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m03
	
	I1120 20:54:04.105607  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.123058  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.123306  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.123324  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:04.258245  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:54:04.258278  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:04.258296  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:04.258308  323157 provision.go:84] configureAuth start
	I1120 20:54:04.258362  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:04.279610  323157 provision.go:143] copyHostCerts
	I1120 20:54:04.279658  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279700  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:04.279713  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:04.279830  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:04.279954  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.279983  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:04.279994  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:04.280037  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:04.280114  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280137  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:04.280143  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:04.280182  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:04.280275  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m03 san=[127.0.0.1 192.168.49.4 ha-922218-m03 localhost minikube]
	I1120 20:54:04.594873  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:04.594949  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:04.595006  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.620652  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:04.724930  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:04.724996  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:04.744735  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:04.744808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:04.767156  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:04.767237  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:04.786208  323157 provision.go:87] duration metric: took 527.885771ms to configureAuth
	I1120 20:54:04.786260  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:04.786486  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:04.786596  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:04.804998  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:04.805211  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1120 20:54:04.805245  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:05.142154  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:54:05.142184  323157 machine.go:97] duration metric: took 4.349303942s to provisionDockerMachine
	I1120 20:54:05.142196  323157 start.go:293] postStartSetup for "ha-922218-m03" (driver="docker")
	I1120 20:54:05.142207  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:05.142302  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:05.142352  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.161336  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.258512  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:05.262505  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:05.262541  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:05.262557  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:05.262619  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:05.262714  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:05.262726  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:05.262809  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:05.270992  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:05.290254  323157 start.go:296] duration metric: took 148.013138ms for postStartSetup
	I1120 20:54:05.290349  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:05.290395  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.312238  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.418404  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:05.424662  323157 fix.go:56] duration metric: took 4.979214262s for fixHost
	I1120 20:54:05.424693  323157 start.go:83] releasing machines lock for "ha-922218-m03", held for 4.979275228s
	I1120 20:54:05.424774  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:54:05.448969  323157 out.go:179] * Found network options:
	I1120 20:54:05.450451  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1120 20:54:05.453201  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453264  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453295  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:05.453313  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:05.453406  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:05.453469  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.453486  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:05.453555  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:54:05.475420  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.475725  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:54:05.630989  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:05.636113  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:05.636175  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:05.644977  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:54:05.645012  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:05.645047  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:05.645097  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:05.661262  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:05.674425  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:05.674494  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:05.689725  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:05.702759  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:05.825858  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:05.942569  323157 docker.go:234] disabling docker service ...
	I1120 20:54:05.942658  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:05.958482  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:05.972123  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:06.094822  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:06.215707  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:54:06.229448  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:06.245084  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:06.245154  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.254965  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:06.255020  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.265259  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.275476  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.285519  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:06.294777  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.304916  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.313870  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:06.322957  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:06.330497  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:06.338069  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:06.450575  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
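
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed before reloading systemd and restarting crio: it pins the pause image, forces the systemd cgroup manager, and re-adds the net.ipv4.ip_unprivileged_port_start=0 default sysctl. A rough Go equivalent of the first two substitutions, assuming the drop-in file exists as shown; a sketch, not what minikube itself runs:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
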
	I1120 20:54:06.648124  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:06.648243  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:06.653061  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:06.653129  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:06.657494  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:06.699746  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 20:54:06.699846  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.736255  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:06.768946  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:06.770257  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:06.771411  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:06.772594  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:06.792494  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:06.797451  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:06.810322  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:06.810733  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:06.811056  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:06.832939  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:06.833235  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.4
	I1120 20:54:06.833251  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:06.833270  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:06.833418  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:06.833458  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:06.833467  323157 certs.go:257] generating profile certs ...
	I1120 20:54:06.833538  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key
	I1120 20:54:06.833595  323157 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key.8321a6cf
	I1120 20:54:06.833629  323157 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key
	I1120 20:54:06.833641  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:06.833655  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:06.833667  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:06.833679  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:06.833691  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1120 20:54:06.833704  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1120 20:54:06.833716  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1120 20:54:06.833730  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1120 20:54:06.833780  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:06.833808  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:06.833818  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:06.833838  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:06.833859  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:06.833880  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:06.833917  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:06.833947  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:06.833959  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:06.833973  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:06.834021  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:54:06.855612  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:54:06.947569  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1120 20:54:06.951943  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1120 20:54:06.960328  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1120 20:54:06.963907  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1120 20:54:06.972305  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1120 20:54:06.975879  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1120 20:54:06.984275  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1120 20:54:06.987841  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1120 20:54:06.995987  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1120 20:54:06.999744  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1120 20:54:07.008281  323157 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1120 20:54:07.011963  323157 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1120 20:54:07.020131  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:07.038787  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:07.058870  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:07.076347  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:07.093829  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 20:54:07.111361  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 20:54:07.133151  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 20:54:07.155916  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 20:54:07.176755  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:07.200109  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:07.222203  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:07.243966  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1120 20:54:07.260671  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1120 20:54:07.277366  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1120 20:54:07.293185  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1120 20:54:07.309452  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1120 20:54:07.324432  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1120 20:54:07.339188  323157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1120 20:54:07.353766  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:07.359885  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.367247  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:07.374693  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378281  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.378337  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:07.415439  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 20:54:07.423662  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.431392  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:07.439351  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442939  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.442985  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:07.477391  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:07.485472  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.493309  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:07.500900  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504615  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.504678  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:07.540459  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:54:07.548510  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:07.552608  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 20:54:07.587157  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 20:54:07.623309  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 20:54:07.659308  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 20:54:07.694048  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 20:54:07.730482  323157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
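
Each of the openssl x509 -checkend 86400 runs above exits non-zero if the named certificate expires within the next 24 hours; minikube uses that to decide whether control-plane certificates need regeneration. The same check expressed with Go's crypto/x509 (a sketch; the path is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // matching `openssl x509 -checkend <seconds>` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
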
	I1120 20:54:07.766483  323157 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1120 20:54:07.766598  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 20:54:07.766625  323157 kube-vip.go:115] generating kube-vip config ...
	I1120 20:54:07.766666  323157 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1120 20:54:07.780008  323157 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:54:07.780076  323157 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
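
The probe a few lines up (sudo sh -c "lsmod | grep ip_vs") exited 1 with empty output, so IPVS-based control-plane load balancing is skipped and the generated kube-vip manifest falls back to ARP mode (vip_arp: "true", leader election via the plndr-cp-lock lease). A minimal sketch of the same kernel-module probe, reading /proc/modules directly instead of shelling out:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // hasModule reports whether any loaded kernel module name starts with prefix,
    // e.g. prefix "ip_vs" matches ip_vs, ip_vs_rr, ip_vs_wrr, ...
    func hasModule(prefix string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) > 0 && strings.HasPrefix(fields[0], prefix) {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasModule("ip_vs")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ip_vs available:", ok)
    }
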
	I1120 20:54:07.780149  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:07.788134  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:07.788227  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1120 20:54:07.796010  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:07.808930  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:07.821862  323157 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1120 20:54:07.834855  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:07.838597  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:07.850360  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:07.963081  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:07.976660  323157 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 20:54:07.976968  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:07.979321  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:07.980344  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:08.088528  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:08.102382  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:08.102458  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:08.102723  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105908  323157 node_ready.go:49] node "ha-922218-m03" is "Ready"
	I1120 20:54:08.105930  323157 node_ready.go:38] duration metric: took 3.189835ms for node "ha-922218-m03" to be "Ready" ...
	I1120 20:54:08.105943  323157 api_server.go:52] waiting for apiserver process to appear ...
	I1120 20:54:08.105984  323157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:54:08.117937  323157 api_server.go:72] duration metric: took 141.218493ms to wait for apiserver process to appear ...
	I1120 20:54:08.117959  323157 api_server.go:88] waiting for apiserver healthz status ...
	I1120 20:54:08.117974  323157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1120 20:54:08.122063  323157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1120 20:54:08.123003  323157 api_server.go:141] control plane version: v1.34.1
	I1120 20:54:08.123025  323157 api_server.go:131] duration metric: took 5.061002ms to wait for apiserver health ...
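
The healthz wait above is a straightforward poll: GET https://192.168.49.2:8443/healthz until it returns 200 with body "ok" or the deadline passes. A sketch of such a loop, assuming the cluster CA file from the client config shown earlier; illustrative only, not minikube's api_server.go:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // CA path as reported in the kapi.go client config above.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("no CA certificates parsed")
        }
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver /healthz did not become healthy in time")
    }
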
	I1120 20:54:08.123033  323157 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 20:54:08.128879  323157 system_pods.go:59] 26 kube-system pods found
	I1120 20:54:08.128913  323157 system_pods.go:61] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.128922  323157 system_pods.go:61] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.128934  323157 system_pods.go:61] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.128940  323157 system_pods.go:61] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.128953  323157 system_pods.go:61] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.128958  323157 system_pods.go:61] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.128965  323157 system_pods.go:61] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.128968  323157 system_pods.go:61] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.128973  323157 system_pods.go:61] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.128980  323157 system_pods.go:61] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.128984  323157 system_pods.go:61] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.128988  323157 system_pods.go:61] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.128993  323157 system_pods.go:61] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.128997  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.129005  323157 system_pods.go:61] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.129009  323157 system_pods.go:61] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.129016  323157 system_pods.go:61] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.129020  323157 system_pods.go:61] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.129026  323157 system_pods.go:61] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.129029  323157 system_pods.go:61] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.129032  323157 system_pods.go:61] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.129036  323157 system_pods.go:61] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.129042  323157 system_pods.go:61] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.129045  323157 system_pods.go:61] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.129047  323157 system_pods.go:61] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.129050  323157 system_pods.go:61] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.129056  323157 system_pods.go:74] duration metric: took 6.018012ms to wait for pod list to return data ...
	I1120 20:54:08.129064  323157 default_sa.go:34] waiting for default service account to be created ...
	I1120 20:54:08.131679  323157 default_sa.go:45] found service account: "default"
	I1120 20:54:08.131697  323157 default_sa.go:55] duration metric: took 2.627778ms for default service account to be created ...
	I1120 20:54:08.131713  323157 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 20:54:08.136580  323157 system_pods.go:86] 26 kube-system pods found
	I1120 20:54:08.136605  323157 system_pods.go:89] "coredns-66bc5c9577-2msz7" [4e6c84e9-3590-4bc3-8084-22c659e84a9f] Running
	I1120 20:54:08.136610  323157 system_pods.go:89] "coredns-66bc5c9577-kd4l6" [f94b2b80-dc09-4f6f-b101-31f031967f88] Running
	I1120 20:54:08.136614  323157 system_pods.go:89] "etcd-ha-922218" [9d16c53e-e5df-472a-af11-c22fd0147e9d] Running
	I1120 20:54:08.136617  323157 system_pods.go:89] "etcd-ha-922218-m02" [6f61e818-1abc-4bed-8a2c-3c2d4467a1ee] Running
	I1120 20:54:08.136625  323157 system_pods.go:89] "etcd-ha-922218-m03" [2e460128-5db1-4359-951c-f3f02367389f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 20:54:08.136629  323157 system_pods.go:89] "kindnet-8ql4z" [6afa7744-c6f4-4db1-9030-b1801dcd49ef] Running
	I1120 20:54:08.136637  323157 system_pods.go:89] "kindnet-f6wtm" [319a35be-d837-443f-9469-651c49930906] Running
	I1120 20:54:08.136642  323157 system_pods.go:89] "kindnet-q78zp" [66150fd3-5bc6-466f-afa3-ce752d345b21] Running
	I1120 20:54:08.136647  323157 system_pods.go:89] "kindnet-xhlv4" [2622c42c-7982-4f81-b5ac-a6c38b49c6e4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 20:54:08.136652  323157 system_pods.go:89] "kube-apiserver-ha-922218" [b13299a9-c9d1-4931-ad3d-09de93aa150d] Running
	I1120 20:54:08.136656  323157 system_pods.go:89] "kube-apiserver-ha-922218-m02" [3899b637-536e-4dcd-bba3-14a2cef7416a] Running
	I1120 20:54:08.136661  323157 system_pods.go:89] "kube-apiserver-ha-922218-m03" [d9120c8d-9e02-4e39-8a4e-34562fd66499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 20:54:08.136666  323157 system_pods.go:89] "kube-controller-manager-ha-922218" [09fe7ec3-0ac4-4ec9-8a30-b3f22f31a786] Running
	I1120 20:54:08.136670  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m02" [222bd190-000b-4cf2-957e-2221c26b6fcf] Running
	I1120 20:54:08.136676  323157 system_pods.go:89] "kube-controller-manager-ha-922218-m03" [14e171b8-9022-43e7-953a-452df4686016] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 20:54:08.136680  323157 system_pods.go:89] "kube-proxy-4cpch" [e388fb28-8fd4-46bb-9d99-1a0b5a9dd561] Running
	I1120 20:54:08.136685  323157 system_pods.go:89] "kube-proxy-hjm8j" [f371644c-4f44-4133-92a3-e6f177be790f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 20:54:08.136689  323157 system_pods.go:89] "kube-proxy-hz8k6" [8812ef99-b109-4a62-82ea-97c0c8876750] Running
	I1120 20:54:08.136693  323157 system_pods.go:89] "kube-proxy-vqk4x" [1fe1dcc4-b41f-4da1-894e-e7fe935e3e63] Running
	I1120 20:54:08.136696  323157 system_pods.go:89] "kube-scheduler-ha-922218" [6c6ed775-bc09-466c-8cc3-2e39cef71c4b] Running
	I1120 20:54:08.136710  323157 system_pods.go:89] "kube-scheduler-ha-922218-m02" [ec376ef9-6be9-4dc7-afc7-d2b1ac98b8da] Running
	I1120 20:54:08.136718  323157 system_pods.go:89] "kube-scheduler-ha-922218-m03" [7885891d-f4c8-47b4-aa62-65007262939c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 20:54:08.136721  323157 system_pods.go:89] "kube-vip-ha-922218" [9fee8f89-18c6-4cb5-b3eb-e32e184ed044] Running
	I1120 20:54:08.136724  323157 system_pods.go:89] "kube-vip-ha-922218-m02" [2c94c581-8b2c-4937-af9f-01f6a46126c2] Running
	I1120 20:54:08.136727  323157 system_pods.go:89] "kube-vip-ha-922218-m03" [32b0567a-d92d-444c-8518-2eb228684fb4] Running
	I1120 20:54:08.136730  323157 system_pods.go:89] "storage-provisioner" [ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6] Running
	I1120 20:54:08.136739  323157 system_pods.go:126] duration metric: took 5.020694ms to wait for k8s-apps to be running ...
	I1120 20:54:08.136745  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:08.136787  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:08.150040  323157 system_svc.go:56] duration metric: took 13.283775ms WaitForService to wait for kubelet
	I1120 20:54:08.150069  323157 kubeadm.go:587] duration metric: took 173.353654ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:08.150089  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:08.153814  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153839  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153854  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153860  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153866  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153871  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153876  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:08.153888  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:08.153894  323157 node_conditions.go:105] duration metric: took 3.799942ms to run NodePressure ...
	I1120 20:54:08.153910  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:08.153941  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:08.155986  323157 out.go:203] 
	I1120 20:54:08.157318  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:08.157412  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.158743  323157 out.go:179] * Starting "ha-922218-m04" worker node in "ha-922218" cluster
	I1120 20:54:08.159836  323157 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:54:08.160869  323157 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:54:08.161862  323157 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:54:08.161877  323157 cache.go:65] Caching tarball of preloaded images
	I1120 20:54:08.161937  323157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:54:08.161978  323157 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 20:54:08.161992  323157 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 20:54:08.162094  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.182859  323157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 20:54:08.182880  323157 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 20:54:08.182897  323157 cache.go:243] Successfully downloaded all kic artifacts
	I1120 20:54:08.182927  323157 start.go:360] acquireMachinesLock for ha-922218-m04: {Name:mk1c4e4c260415277383e4e2d7891bdf9d980713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 20:54:08.182984  323157 start.go:364] duration metric: took 40.112µs to acquireMachinesLock for "ha-922218-m04"
	I1120 20:54:08.183005  323157 start.go:96] Skipping create...Using existing machine configuration
	I1120 20:54:08.183013  323157 fix.go:54] fixHost starting: m04
	I1120 20:54:08.183210  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.201956  323157 fix.go:112] recreateIfNeeded on ha-922218-m04: state=Stopped err=<nil>
	W1120 20:54:08.201985  323157 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 20:54:08.203921  323157 out.go:252] * Restarting existing docker container for "ha-922218-m04" ...
	I1120 20:54:08.203990  323157 cli_runner.go:164] Run: docker start ha-922218-m04
	I1120 20:54:08.500882  323157 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:54:08.520205  323157 kic.go:430] container "ha-922218-m04" state is running.
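
The cli_runner entries throughout this log shell out to the docker CLI with Go templates, e.g. docker container inspect --format={{.State.Status}} to confirm the restarted container is running before provisioning resumes. The same call from Go via os/exec (a sketch):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // containerState shells out to the docker CLI the same way the
    // cli_runner.go entries above do, returning e.g. "running" or "exited".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("ha-922218-m04")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("container state:", state)
    }
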
	I1120 20:54:08.520698  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:08.539598  323157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/config.json ...
	I1120 20:54:08.539924  323157 machine.go:94] provisionDockerMachine start ...
	I1120 20:54:08.540000  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:08.558817  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:08.559028  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:08.559039  323157 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 20:54:08.559647  323157 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56832->127.0.0.1:32823: read: connection reset by peer
	I1120 20:54:11.694470  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
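
The handshake failure at 20:54:08 ("connection reset by peer") is expected immediately after docker start: the container is up before sshd inside it is listening, so the provisioner simply retries until the dial succeeds, about three seconds later here. A minimal reachability-probe sketch of that retry loop (TCP connect only, which is weaker than a full SSH handshake):

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    // waitForSSH polls addr until a TCP connection succeeds or the deadline
    // passes, mirroring the retry visible between 20:54:08 and 20:54:11 above.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:32823", 30*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sshd is accepting connections")
    }
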
	
	I1120 20:54:11.694498  323157 ubuntu.go:182] provisioning hostname "ha-922218-m04"
	I1120 20:54:11.694556  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.713721  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.714041  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.714063  323157 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-922218-m04 && echo "ha-922218-m04" | sudo tee /etc/hostname
	I1120 20:54:11.857712  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-922218-m04
	
	I1120 20:54:11.857805  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:11.876191  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:11.876435  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:11.876453  323157 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922218-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922218-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922218-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 20:54:12.008064  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 20:54:12.008105  323157 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 20:54:12.008131  323157 ubuntu.go:190] setting up certificates
	I1120 20:54:12.008149  323157 provision.go:84] configureAuth start
	I1120 20:54:12.008245  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.026345  323157 provision.go:143] copyHostCerts
	I1120 20:54:12.026390  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026424  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 20:54:12.026431  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 20:54:12.026501  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 20:54:12.026600  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026623  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 20:54:12.026630  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 20:54:12.026671  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 20:54:12.026742  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026767  323157 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 20:54:12.026776  323157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 20:54:12.026803  323157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 20:54:12.026878  323157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.ha-922218-m04 san=[127.0.0.1 192.168.49.5 ha-922218-m04 localhost minikube]
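
configureAuth regenerates the machine's server certificate with the SAN set listed above (127.0.0.1, 192.168.49.5, ha-922218-m04, localhost, minikube), signed by the shared minikube CA. A compact crypto/x509 sketch of issuing such a certificate; it assumes RSA keys and PKCS#1 PEM encoding for the CA key, and the file names are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    // mustPEM reads the first PEM block from path or aborts.
    func mustPEM(path string) *pem.Block {
        raw, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatalf("%s: no PEM data", path)
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Assumes a PKCS#1 ("RSA PRIVATE KEY") CA key; adjust parsing if PKCS#8.
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-922218-m04"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN set from the provision.go line above.
            DNSNames:    []string{"ha-922218-m04", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        certOut, err := os.Create("server.pem")
        if err != nil {
            log.Fatal(err)
        }
        defer certOut.Close()
        if err := pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
        keyOut, err := os.OpenFile("server-key.pem", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
        if err != nil {
            log.Fatal(err)
        }
        defer keyOut.Close()
        if err := pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}); err != nil {
            log.Fatal(err)
        }
    }
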
	I1120 20:54:12.101540  323157 provision.go:177] copyRemoteCerts
	I1120 20:54:12.101615  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 20:54:12.101661  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.120979  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.218812  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 20:54:12.218866  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 20:54:12.237906  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 20:54:12.237973  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1120 20:54:12.256242  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 20:54:12.256298  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 20:54:12.274435  323157 provision.go:87] duration metric: took 266.26509ms to configureAuth
	I1120 20:54:12.274472  323157 ubuntu.go:206] setting minikube options for container-runtime
	I1120 20:54:12.274774  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:12.274937  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.294444  323157 main.go:143] libmachine: Using SSH client type: native
	I1120 20:54:12.294713  323157 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1120 20:54:12.294742  323157 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 20:54:12.585665  323157 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 20:54:12.585696  323157 machine.go:97] duration metric: took 4.045752536s to provisionDockerMachine
	I1120 20:54:12.585712  323157 start.go:293] postStartSetup for "ha-922218-m04" (driver="docker")
	I1120 20:54:12.585734  323157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 20:54:12.585814  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 20:54:12.585872  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.604768  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.701189  323157 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 20:54:12.705103  323157 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 20:54:12.705131  323157 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 20:54:12.705142  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 20:54:12.705203  323157 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 20:54:12.705316  323157 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 20:54:12.705328  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 20:54:12.705436  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 20:54:12.713808  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:12.733683  323157 start.go:296] duration metric: took 147.949948ms for postStartSetup
	I1120 20:54:12.733781  323157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:54:12.733836  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.752642  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.846722  323157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 20:54:12.851576  323157 fix.go:56] duration metric: took 4.668555957s for fixHost
	I1120 20:54:12.851609  323157 start.go:83] releasing machines lock for "ha-922218-m04", held for 4.668610463s
	I1120 20:54:12.851688  323157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:54:12.872067  323157 out.go:179] * Found network options:
	I1120 20:54:12.873523  323157 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1120 20:54:12.874579  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874614  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874623  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874645  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874656  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	W1120 20:54:12.874666  323157 proxy.go:120] fail to check proxy env: Error ip not in block
	I1120 20:54:12.874743  323157 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 20:54:12.874790  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.874801  323157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 20:54:12.874864  323157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:12.894599  323157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:54:13.046495  323157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 20:54:13.051522  323157 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 20:54:13.051600  323157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 20:54:13.060371  323157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 20:54:13.060402  323157 start.go:496] detecting cgroup driver to use...
	I1120 20:54:13.060441  323157 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 20:54:13.060496  323157 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 20:54:13.075603  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 20:54:13.089123  323157 docker.go:218] disabling cri-docker service (if available) ...
	I1120 20:54:13.089184  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 20:54:13.104495  323157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 20:54:13.117935  323157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 20:54:13.204636  323157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 20:54:13.289453  323157 docker.go:234] disabling docker service ...
	I1120 20:54:13.289527  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 20:54:13.304738  323157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 20:54:13.317782  323157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 20:54:13.405405  323157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 20:54:13.491709  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 20:54:13.504420  323157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 20:54:13.519371  323157 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 20:54:13.519439  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.528469  323157 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 20:54:13.528520  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.537935  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.546887  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.555908  323157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 20:54:13.564139  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.573055  323157 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 20:54:13.581595  323157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
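
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the kubeadm pause image, the systemd cgroup driver, a pod-scoped conmon cgroup, and unprivileged low ports. Reconstructed from those commands (not captured from the node), the touched keys of the drop-in should end up roughly as:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload and systemctl restart crio a few lines below are what make these edits take effect.
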
	I1120 20:54:13.590695  323157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 20:54:13.597950  323157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 20:54:13.605162  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:13.690911  323157 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 20:54:13.836871  323157 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 20:54:13.836951  323157 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 20:54:13.841421  323157 start.go:564] Will wait 60s for crictl version
	I1120 20:54:13.841486  323157 ssh_runner.go:195] Run: which crictl
	I1120 20:54:13.846169  323157 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 20:54:13.871670  323157 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
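
Both "Will wait 60s" steps above follow the same pattern: poll a probe (stat on the CRI socket, then crictl version) until it succeeds or a deadline passes. A local sketch of that bounded wait (waitForPath is a made-up name; minikube actually runs stat over SSH rather than locally):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
    }
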
	I1120 20:54:13.871776  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.899765  323157 ssh_runner.go:195] Run: crio --version
	I1120 20:54:13.930597  323157 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 20:54:13.931748  323157 out.go:179]   - env NO_PROXY=192.168.49.2
	I1120 20:54:13.932757  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1120 20:54:13.933675  323157 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1120 20:54:13.934705  323157 cli_runner.go:164] Run: docker network inspect ha-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 20:54:13.952693  323157 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1120 20:54:13.957363  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
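
The grep/echo pipeline above is an idempotent hosts-file update: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same idea in Go, as a sketch only (ensureHostsEntry is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends
    // "<ip>\t<name>", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry; re-added below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"))
    }
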
	I1120 20:54:13.968716  323157 mustload.go:66] Loading cluster: ha-922218
	I1120 20:54:13.969001  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:13.969254  323157 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:54:13.988111  323157 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:54:13.988373  323157 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218 for IP: 192.168.49.5
	I1120 20:54:13.988385  323157 certs.go:195] generating shared ca certs ...
	I1120 20:54:13.988399  323157 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 20:54:13.988540  323157 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 20:54:13.988575  323157 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 20:54:13.988589  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1120 20:54:13.988603  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1120 20:54:13.988615  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1120 20:54:13.988628  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1120 20:54:13.988691  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 20:54:13.988719  323157 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 20:54:13.988729  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 20:54:13.988750  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 20:54:13.988771  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 20:54:13.988792  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 20:54:13.988827  323157 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 20:54:13.988853  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:13.988866  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem -> /usr/share/ca-certificates/254094.pem
	I1120 20:54:13.988881  323157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /usr/share/ca-certificates/2540942.pem
	I1120 20:54:13.988902  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 20:54:14.007643  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 20:54:14.026465  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 20:54:14.045259  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 20:54:14.064924  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 20:54:14.083817  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 20:54:14.101377  323157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 20:54:14.119564  323157 ssh_runner.go:195] Run: openssl version
	I1120 20:54:14.126329  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.134374  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 20:54:14.142273  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146139  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.146194  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 20:54:14.182277  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 20:54:14.190606  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.198830  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 20:54:14.206817  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210855  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.210906  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 20:54:14.245946  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 20:54:14.254083  323157 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.261737  323157 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 20:54:14.269638  323157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273524  323157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.273580  323157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 20:54:14.308064  323157 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
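
Each CA file above goes through the same routine: copy the PEM into /usr/share/ca-certificates, symlink it under /etc/ssl/certs, then verify the OpenSSL subject-hash link (the b5213941.0, 51391683.0 and 3ec20f2e.0 names). A sketch of the hash-and-link step (linkCertByHash is hypothetical; the paths assume root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash runs `openssl x509 -hash -noout -in <pem>` and creates
    // /etc/ssl/certs/<hash>.0 pointing at the cert — the layout OpenSSL-based
    // clients use to locate trusted CAs.
    func linkCertByHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // behave like `ln -fs`: replace a stale link
    	return link, os.Symlink(pem, link)
    }

    func main() {
    	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem")
    	fmt.Println(link, err)
    }
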
	I1120 20:54:14.316236  323157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 20:54:14.320194  323157 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 20:54:14.320268  323157 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1120 20:54:14.320379  323157 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-922218-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-922218 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
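
The kubeadm.go:947 block above is the rendered kubelet systemd drop-in; two lines below it is scp'd from memory to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes). A toy rendering of just the per-node fields with text/template, using this run's values (the template is cut down, not minikube's full unit):

    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Kubelet": "/var/lib/minikube/binaries/v1.34.1/kubelet",
    		"Node":    "ha-922218-m04",
    		"IP":      "192.168.49.5",
    	})
    }
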
	I1120 20:54:14.320454  323157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 20:54:14.328815  323157 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 20:54:14.328872  323157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1120 20:54:14.336516  323157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1120 20:54:14.349467  323157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 20:54:14.362001  323157 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1120 20:54:14.365657  323157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 20:54:14.375549  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.458116  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.472066  323157 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1120 20:54:14.472382  323157 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:54:14.474034  323157 out.go:179] * Verifying Kubernetes components...
	I1120 20:54:14.474976  323157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 20:54:14.559289  323157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 20:54:14.572777  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1120 20:54:14.572849  323157 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1120 20:54:14.573080  323157 node_ready.go:35] waiting up to 6m0s for node "ha-922218-m04" to be "Ready" ...
	W1120 20:54:16.576678  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:19.076525  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	W1120 20:54:21.078346  323157 node_ready.go:57] node "ha-922218-m04" has "Ready":"Unknown" status (will retry)
	I1120 20:54:22.076345  323157 node_ready.go:49] node "ha-922218-m04" is "Ready"
	I1120 20:54:22.076377  323157 node_ready.go:38] duration metric: took 7.503280123s for node "ha-922218-m04" to be "Ready" ...
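
The node_ready.go poll above reduces to fetching the Node object and reading its Ready condition, which flipped from Unknown to True once the restarted kubelet reported in (about 7.5s here). A client-go sketch of that predicate (isNodeReady is illustrative; error handling trimmed):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady fetches the Node and reports whether its Ready condition
    // is True — the check pod/node waiters in the log keep repeating.
    func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := isNodeReady(context.Background(), client, "ha-922218-m04")
    	fmt.Println(ready, err)
    }
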
	I1120 20:54:22.076397  323157 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 20:54:22.076458  323157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:54:22.089909  323157 system_svc.go:56] duration metric: took 13.491851ms WaitForService to wait for kubelet
	I1120 20:54:22.089941  323157 kubeadm.go:587] duration metric: took 7.617823089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 20:54:22.089966  323157 node_conditions.go:102] verifying NodePressure condition ...
	I1120 20:54:22.093121  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093142  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093154  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093158  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093161  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093165  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093170  323157 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 20:54:22.093175  323157 node_conditions.go:123] node cpu capacity is 8
	I1120 20:54:22.093180  323157 node_conditions.go:105] duration metric: took 3.207725ms to run NodePressure ...
	I1120 20:54:22.093197  323157 start.go:242] waiting for startup goroutines ...
	I1120 20:54:22.093255  323157 start.go:256] writing updated cluster config ...
	I1120 20:54:22.093568  323157 ssh_runner.go:195] Run: rm -f paused
	I1120 20:54:22.097398  323157 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:54:22.097827  323157 kapi.go:59] client config for ha-922218: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/ha-922218/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
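
With QPS and Burst left at zero in the rest.Config above, client-go applies its defaults (QPS 5, Burst 10), which is why the burst of per-pod GETs below keeps logging "client-side throttling, not priority and fairness". If the polling rate mattered, raising the limits is two fields on the config; a sketch, assuming a kubeconfig at the default location:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Zero values mean "use defaults" (QPS 5, Burst 10); bump them so
    	// rapid status polling is not rate limited on the client side.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }
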
	I1120 20:54:22.109570  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119449  323157 pod_ready.go:94] pod "coredns-66bc5c9577-2msz7" is "Ready"
	I1120 20:54:22.119483  323157 pod_ready.go:86] duration metric: took 9.881192ms for pod "coredns-66bc5c9577-2msz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.119494  323157 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.131629  323157 pod_ready.go:94] pod "coredns-66bc5c9577-kd4l6" is "Ready"
	I1120 20:54:22.131656  323157 pod_ready.go:86] duration metric: took 12.154214ms for pod "coredns-66bc5c9577-kd4l6" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.134158  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138697  323157 pod_ready.go:94] pod "etcd-ha-922218" is "Ready"
	I1120 20:54:22.138722  323157 pod_ready.go:86] duration metric: took 4.537439ms for pod "etcd-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.138729  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142874  323157 pod_ready.go:94] pod "etcd-ha-922218-m02" is "Ready"
	I1120 20:54:22.142900  323157 pod_ready.go:86] duration metric: took 4.166255ms for pod "etcd-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.142909  323157 pod_ready.go:83] waiting for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.298304  323157 request.go:683] "Waited before sending request" delay="155.234553ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-922218-m03"
	I1120 20:54:22.498845  323157 request.go:683] "Waited before sending request" delay="197.338738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:22.501969  323157 pod_ready.go:94] pod "etcd-ha-922218-m03" is "Ready"
	I1120 20:54:22.502000  323157 pod_ready.go:86] duration metric: took 359.082878ms for pod "etcd-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.698517  323157 request.go:683] "Waited before sending request" delay="196.343264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1120 20:54:22.702321  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:22.898835  323157 request.go:683] "Waited before sending request" delay="196.37899ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218"
	I1120 20:54:23.098414  323157 request.go:683] "Waited before sending request" delay="196.292789ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:23.101586  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218" is "Ready"
	I1120 20:54:23.101613  323157 pod_ready.go:86] duration metric: took 399.267945ms for pod "kube-apiserver-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.101634  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.299099  323157 request.go:683] "Waited before sending request" delay="197.354769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m02"
	I1120 20:54:23.498968  323157 request.go:683] "Waited before sending request" delay="196.361911ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:23.502012  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m02" is "Ready"
	I1120 20:54:23.502037  323157 pod_ready.go:86] duration metric: took 400.398297ms for pod "kube-apiserver-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.502045  323157 pod_ready.go:83] waiting for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:23.698407  323157 request.go:683] "Waited before sending request" delay="196.284088ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-922218-m03"
	I1120 20:54:23.899090  323157 request.go:683] "Waited before sending request" delay="197.347334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:23.902334  323157 pod_ready.go:94] pod "kube-apiserver-ha-922218-m03" is "Ready"
	I1120 20:54:23.902359  323157 pod_ready.go:86] duration metric: took 400.308088ms for pod "kube-apiserver-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.098830  323157 request.go:683] "Waited before sending request" delay="196.34133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1120 20:54:24.102694  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.299178  323157 request.go:683] "Waited before sending request" delay="196.360417ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218"
	I1120 20:54:24.499104  323157 request.go:683] "Waited before sending request" delay="196.347724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218"
	I1120 20:54:24.502309  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218" is "Ready"
	I1120 20:54:24.502336  323157 pod_ready.go:86] duration metric: took 399.617093ms for pod "kube-controller-manager-ha-922218" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.502348  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.698782  323157 request.go:683] "Waited before sending request" delay="196.335349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m02"
	I1120 20:54:24.898597  323157 request.go:683] "Waited before sending request" delay="196.345917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:24.901960  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m02" is "Ready"
	I1120 20:54:24.901992  323157 pod_ready.go:86] duration metric: took 399.637685ms for pod "kube-controller-manager-ha-922218-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:24.902001  323157 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.098365  323157 request.go:683] "Waited before sending request" delay="196.278218ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-922218-m03"
	I1120 20:54:25.299280  323157 request.go:683] "Waited before sending request" delay="197.379888ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.302430  323157 pod_ready.go:94] pod "kube-controller-manager-ha-922218-m03" is "Ready"
	I1120 20:54:25.302455  323157 pod_ready.go:86] duration metric: took 400.448425ms for pod "kube-controller-manager-ha-922218-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.498873  323157 request.go:683] "Waited before sending request" delay="196.293203ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1120 20:54:25.502860  323157 pod_ready.go:83] waiting for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.698288  323157 request.go:683] "Waited before sending request" delay="195.281134ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cpch"
	I1120 20:54:25.898934  323157 request.go:683] "Waited before sending request" delay="197.356231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m03"
	I1120 20:54:25.902128  323157 pod_ready.go:94] pod "kube-proxy-4cpch" is "Ready"
	I1120 20:54:25.902154  323157 pod_ready.go:86] duration metric: took 399.270347ms for pod "kube-proxy-4cpch" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:25.902162  323157 pod_ready.go:83] waiting for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 20:54:26.098606  323157 request.go:683] "Waited before sending request" delay="196.346655ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.299163  323157 request.go:683] "Waited before sending request" delay="197.345494ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:26.498649  323157 request.go:683] "Waited before sending request" delay="96.287539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hjm8j"
	I1120 20:54:26.699151  323157 request.go:683] "Waited before sending request" delay="197.392783ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.098399  323157 request.go:683] "Waited before sending request" delay="192.27694ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	I1120 20:54:27.498455  323157 request.go:683] "Waited before sending request" delay="92.237627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-922218-m02"
	W1120 20:54:27.908326  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	[... 100 nearly identical pod_ready.go:104 warnings for "kube-proxy-hjm8j" elided, repeating roughly every 2.5s from 20:54:29 through 20:58:18 ...]
	W1120 20:58:21.408380  323157 pod_ready.go:104] pod "kube-proxy-hjm8j" is not "Ready", error: <nil>
	I1120 20:58:22.098421  323157 pod_ready.go:86] duration metric: took 3m56.196242024s for pod "kube-proxy-hjm8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 20:58:22.098463  323157 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1120 20:58:22.098478  323157 pod_ready.go:40] duration metric: took 4m0.001055692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 20:58:22.100129  323157 out.go:203] 
	W1120 20:58:22.101328  323157 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1120 20:58:22.102425  323157 out.go:203] 
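
For reference, the predicate that the four-minute loop above kept evaluating is just the pod's PodReady condition; the <nil> error shows the GETs themselves succeeded and the pod simply never became Ready before the 4m0s budget ran out. A sketch of that check with client-go types (isPodReady is illustrative):

    package podready

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True —
    // the condition pod_ready.go polls above.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
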
	
	
	==> CRI-O <==
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179274516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179299573Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.179311849Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.183045155Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.183072535Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.18309208Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.18663601Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.186665457Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.186685532Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190271629Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190305928Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.190331869Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.193887848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 20:52:29 ha-922218 crio[584]: time="2025-11-20T20:52:29.193915839Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.344973428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2a8081bc-d00c-414c-a1b1-9cbbeb6545fc name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.34605763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=97d6402c-24a3-4d7a-a623-2f33e58951f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.347291242Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9641ec35-68ba-4c1b-9d66-1d5bd6212949 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.347491562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.352874189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.353102373Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc9d1e539cccfade6ddf8be32aa73b0c357fac2f392b0db94ea11587ea75a0d0/merged/etc/passwd: no such file or directory"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.35313636Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc9d1e539cccfade6ddf8be32aa73b0c357fac2f392b0db94ea11587ea75a0d0/merged/etc/group: no such file or directory"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.353871031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.386863103Z" level=info msg="Created container eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330: kube-system/storage-provisioner/storage-provisioner" id=9641ec35-68ba-4c1b-9d66-1d5bd6212949 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.387596765Z" level=info msg="Starting container: eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330" id=f71dc408-4a18-4daa-ad26-33c0ad73f76c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 20:52:35 ha-922218 crio[584]: time="2025-11-20T20:52:35.389753227Z" level=info msg="Started container" PID=1426 containerID=eb00966d573d1c10c59d3ed85f70753d14532543062430985fb23499c2323330 description=kube-system/storage-provisioner/storage-provisioner id=f71dc408-4a18-4daa-ad26-33c0ad73f76c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d671375233375d3d75a3a3d4276bdfb8d5b7eec68c6b7eb4ea37982ea5c49d97
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	eb00966d573d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       2                   d671375233375       storage-provisioner                 kube-system
	fac2b6f885b5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       1                   d671375233375       storage-provisioner                 kube-system
	5394a253d0bd7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   cc5ea32e456a2       coredns-66bc5c9577-2msz7            kube-system
	48eb79307d4ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   911861e8c0958       coredns-66bc5c9577-kd4l6            kube-system
	ac1bee818f4ec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   66b55ad69697f       busybox-7b57f96db7-58ttm            default
	2651827b33b3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 minutes ago       Running             kube-proxy                0                   a4f0d5ddfe85b       kube-proxy-vqk4x                    kube-system
	084c8a1dec078       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               0                   f408c5ac54d0a       kindnet-f6wtm                       kube-system
	65d1e3fad6b2d       ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38   6 minutes ago       Running             kube-vip                  0                   e151e1b9dd106       kube-vip-ha-922218                  kube-system
	8b6d87aa881c9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      0                   0cb5cedb8ce4c       etcd-ha-922218                      kube-system
	406607e74d161       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 minutes ago       Running             kube-controller-manager   0                   3539ecfac7323       kube-controller-manager-ha-922218   kube-system
	9e882a89de870       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 minutes ago       Running             kube-scheduler            0                   2aad348477526       kube-scheduler-ha-922218            kube-system
	45a868d0ee3cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 minutes ago       Running             kube-apiserver            0                   98c45e98892bd       kube-apiserver-ha-922218            kube-system
	
	
	==> coredns [48eb79307d4edc3ce53d60f38b8b610f913c0a64dfbb891e06a119e0d346362c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54940 - 19184 "HINFO IN 64422634499881743.2714493057616586485. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.019551872s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [5394a253d0bd78f37e15200679d74a85d3d2641b2cc3dfede103a3f6b42a4ea3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55011 - 16464 "HINFO IN 7317740798094180439.8936119426908172663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018307873s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-922218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T20_48_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:48:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:57:24 +0000   Thu, 20 Nov 2025 20:48:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-922218
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                e1d19c5d-2aa9-4c37-b403-bdef012a1c79
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-58ttm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 coredns-66bc5c9577-2msz7             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 coredns-66bc5c9577-kd4l6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-ha-922218                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-f6wtm                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-922218             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-922218    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vqk4x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-922218             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-922218                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10m                    node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  NodeReady                9m56s                  kubelet          Node ha-922218 status is now: NodeReady
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           9m1s                   node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           7m20s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  Starting                 6m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-922218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-922218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x8 over 6m24s)  kubelet          Node ha-922218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-922218 event: Registered Node ha-922218 in Controller
	
	
	Name:               ha-922218-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T20_48_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:57:33 +0000   Thu, 20 Nov 2025 20:53:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-922218-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                0aac81f5-0fe4-4a48-b7c8-cca1476ce619
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rsl29                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 etcd-ha-922218-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m39s
	  kube-system                 kindnet-xhlv4                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m40s
	  kube-system                 kube-apiserver-ha-922218-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-controller-manager-ha-922218-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-proxy-hjm8j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-scheduler-ha-922218-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-vip-ha-922218-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   RegisteredNode           9m38s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           9m1s                   node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m26s (x8 over 7m26s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m26s (x8 over 7m26s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m26s (x8 over 7m26s)  kubelet          Node ha-922218-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-922218-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m22s (x8 over 6m22s)  kubelet          Node ha-922218-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m15s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Normal   RegisteredNode           6m15s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	  Warning  ContainerGCFailed        5m22s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-922218-m02 event: Registered Node ha-922218-m02 in Controller
	
	
	Name:               ha-922218-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-922218-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=ha-922218
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_20T20_50_17_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 20:50:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-922218-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 20:58:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 20:58:36 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 20:58:36 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 20:58:36 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 20:58:36 +0000   Thu, 20 Nov 2025 20:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-922218-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                dba8f593-ce38-490f-a745-3998b81f9342
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-trm9z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kindnet-q78zp               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m19s
	  kube-system                 kube-proxy-hz8k6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 8m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m19s (x3 over 8m19s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m19s (x3 over 8m19s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m19s (x3 over 8m19s)  kubelet          Node ha-922218-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m17s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           8m17s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  NodeReady                8m6s                   kubelet          Node ha-922218-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m20s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  NodeNotReady             5m25s                  node-controller  Node ha-922218-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-922218-m04 event: Registered Node ha-922218-m04 in Controller
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m24s (x8 over 4m28s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x8 over 4m28s)  kubelet          Node ha-922218-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x8 over 4m28s)  kubelet          Node ha-922218-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [8b6d87aa881c9d7ce48cf020cc5a82bcd71165681bd09bdbef589896ef08b244] <==
	{"level":"info","ts":"2025-11-20T20:54:01.858402Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.865181Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:54:01.865266Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:54:02.958459Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5db77087b5cfd589","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T20:54:02.958492Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5db77087b5cfd589","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-20T20:58:28.077211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:58:28.093800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T20:58:28.101335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:51806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T20:58:28.111247Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(2049575369545135139 12593026477526642892)"}
	{"level":"info","ts":"2025-11-20T20:58:28.112274Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5db77087b5cfd589","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-20T20:58:28.112322Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.112383Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:58:28.112417Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.112449Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:58:28.112456Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:58:28.112518Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.112662Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","error":"context canceled"}
	{"level":"warn","ts":"2025-11-20T20:58:28.112731Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5db77087b5cfd589","error":"failed to read 5db77087b5cfd589 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-11-20T20:58:28.112755Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.112863Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589","error":"context canceled"}
	{"level":"info","ts":"2025-11-20T20:58:28.112945Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:58:28.113009Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5db77087b5cfd589"}
	{"level":"info","ts":"2025-11-20T20:58:28.113046Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.119085Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"5db77087b5cfd589"}
	{"level":"warn","ts":"2025-11-20T20:58:28.120595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:50772","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:58:36 up  3:40,  0 user,  load average: 1.47, 0.97, 0.95
	Linux ha-922218 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [084c8a1dec0788f04fd73911d864fb9472875659cb154976c01f44fab76b9c71] <==
	I1120 20:57:59.170988       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:09.175188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:58:09.175257       1 main.go:301] handling current node
	I1120 20:58:09.175275       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:58:09.175282       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:58:09.175490       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:58:09.175500       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	I1120 20:58:09.175616       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:58:09.175626       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:19.172321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:58:19.172356       1 main.go:301] handling current node
	I1120 20:58:19.172371       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:58:19.172377       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:58:19.172541       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:58:19.172550       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	I1120 20:58:19.172630       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:58:19.172637       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:29.170259       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1120 20:58:29.170293       1 main.go:324] Node ha-922218-m04 has CIDR [10.244.3.0/24] 
	I1120 20:58:29.170483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1120 20:58:29.170495       1 main.go:301] handling current node
	I1120 20:58:29.170512       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1120 20:58:29.170520       1 main.go:324] Node ha-922218-m02 has CIDR [10.244.1.0/24] 
	I1120 20:58:29.170637       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1120 20:58:29.170647       1 main.go:324] Node ha-922218-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [45a868d0ee3cc88db4f8ceed46d0f4eddce85b589457dcbb93848dd871b099bf] <==
	I1120 20:52:18.209169       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 20:52:18.209633       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 20:52:18.209681       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 20:52:18.215582       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 20:52:18.215701       1 aggregator.go:171] initial CRD sync complete...
	I1120 20:52:18.215715       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 20:52:18.215722       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 20:52:18.215728       1 cache.go:39] Caches are synced for autoregister controller
	I1120 20:52:18.215793       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 20:52:18.216200       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 20:52:18.216254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 20:52:18.227275       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	E1120 20:52:18.227303       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 20:52:18.227312       1 policy_source.go:240] refreshing policies
	I1120 20:52:18.274334       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 20:52:18.281102       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 20:52:18.464733       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 20:52:19.113191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1120 20:52:19.540161       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1120 20:52:19.541903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 20:52:19.548185       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 20:52:21.905119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:21.905119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 20:52:21.961328       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 20:52:50.429683       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [406607e74d1618ca02cbf22003052ea65983c0e1235732ec547478bff625b9ff] <==
	I1120 20:52:21.556473       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 20:52:21.557545       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 20:52:21.559723       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 20:52:21.559753       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 20:52:21.559790       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 20:52:21.559844       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 20:52:21.559853       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 20:52:21.559860       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 20:52:21.561946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 20:52:21.563127       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 20:52:21.565300       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 20:52:21.565324       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 20:52:21.569631       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 20:52:21.571824       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 20:52:21.575158       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 20:52:21.577485       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1120 20:52:21.578753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 20:52:21.579777       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 20:52:21.706277       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-c9k7m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-c9k7m\": the object has been modified; please apply your changes to the latest version and try again"
	I1120 20:52:21.706377       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9160caa3-ca5a-482c-9900-6b473b1ad059", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-c9k7m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-c9k7m": the object has been modified; please apply your changes to the latest version and try again
	I1120 20:52:28.494750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-922218-m04"
	I1120 20:53:11.511054       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1120 20:54:01.531547       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 20:54:21.706569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-922218-m04"
	I1120 20:58:30.288842       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-922218-m04"
	
	
	==> kube-proxy [2651827b33b3dc42e63a885b5752ffd7b702afd0a4d5394a2196bddf43144ed2] <==
	I1120 20:52:18.767903       1 server_linux.go:53] "Using iptables proxy"
	I1120 20:52:18.833437       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1120 20:52:21.904740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-922218&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1120 20:52:23.333884       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 20:52:23.333927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1120 20:52:23.334020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 20:52:23.353055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 20:52:23.353120       1 server_linux.go:132] "Using iptables Proxier"
	I1120 20:52:23.358578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 20:52:23.358901       1 server.go:527] "Version info" version="v1.34.1"
	I1120 20:52:23.358929       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:23.360415       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 20:52:23.360448       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 20:52:23.360457       1 config.go:106] "Starting endpoint slice config controller"
	I1120 20:52:23.360482       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 20:52:23.360493       1 config.go:200] "Starting service config controller"
	I1120 20:52:23.360512       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 20:52:23.360533       1 config.go:309] "Starting node config controller"
	I1120 20:52:23.360545       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 20:52:23.360553       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 20:52:23.460696       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 20:52:23.460696       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 20:52:23.460949       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9e882a89de870c006dd62af4f419f69f18af696b07ee1686b859a279092e03e0] <==
	I1120 20:52:13.226234       1 serving.go:386] Generated self-signed cert in-memory
	W1120 20:52:18.146292       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 20:52:18.146332       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 20:52:18.146353       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 20:52:18.146363       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 20:52:18.193971       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 20:52:18.194007       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 20:52:18.196860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:52:18.196907       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 20:52:18.197257       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 20:52:18.197345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 20:52:18.298064       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266715     758 kubelet_node_status.go:124] "Node was previously registered" node="ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266820     758 kubelet_node_status.go:78] "Successfully registered node" node="ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.266859     758 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.267754     758 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 20:52:18 ha-922218 kubelet[758]: E1120 20:52:18.275063     758 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-922218\" already exists" pod="kube-system/kube-controller-manager-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.317394     758 apiserver.go:52] "Watching apiserver"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.321290     758 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-922218" podUID="c76017a7-d5b6-4722-a147-ee435bda3cdb"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.336233     758 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.336274     758 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-922218"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.345879     758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29385aad8e38316abef8aa5e6851d452" path="/var/lib/kubelet/pods/29385aad8e38316abef8aa5e6851d452/volumes"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.371273     758 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-922218" podUID="c76017a7-d5b6-4722-a147-ee435bda3cdb"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.402198     758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-922218" podStartSLOduration=0.402177345 podStartE2EDuration="402.177345ms" podCreationTimestamp="2025-11-20 20:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 20:52:18.4018385 +0000 UTC m=+6.147881170" watchObservedRunningTime="2025-11-20 20:52:18.402177345 +0000 UTC m=+6.148220017"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.420232     758 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460065     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-cni-cfg\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460113     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fe1dcc4-b41f-4da1-894e-e7fe935e3e63-lib-modules\") pod \"kube-proxy-vqk4x\" (UID: \"1fe1dcc4-b41f-4da1-894e-e7fe935e3e63\") " pod="kube-system/kube-proxy-vqk4x"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460142     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-xtables-lock\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460163     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6-tmp\") pod \"storage-provisioner\" (UID: \"ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6\") " pod="kube-system/storage-provisioner"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460415     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fe1dcc4-b41f-4da1-894e-e7fe935e3e63-xtables-lock\") pod \"kube-proxy-vqk4x\" (UID: \"1fe1dcc4-b41f-4da1-894e-e7fe935e3e63\") " pod="kube-system/kube-proxy-vqk4x"
	Nov 20 20:52:18 ha-922218 kubelet[758]: I1120 20:52:18.460598     758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/319a35be-d837-443f-9469-651c49930906-lib-modules\") pod \"kindnet-f6wtm\" (UID: \"319a35be-d837-443f-9469-651c49930906\") " pod="kube-system/kindnet-f6wtm"
	Nov 20 20:52:19 ha-922218 kubelet[758]: I1120 20:52:19.390398     758 scope.go:117] "RemoveContainer" containerID="dc3394ace8f04ea97097c48698b5fbe1c460c7357aea54da0f99a76c8c5578c6"
	Nov 20 20:52:20 ha-922218 kubelet[758]: I1120 20:52:20.397922     758 scope.go:117] "RemoveContainer" containerID="dc3394ace8f04ea97097c48698b5fbe1c460c7357aea54da0f99a76c8c5578c6"
	Nov 20 20:52:20 ha-922218 kubelet[758]: I1120 20:52:20.398303     758 scope.go:117] "RemoveContainer" containerID="fac2b6f885b5faaecbbea594acb9ce5d1f4225dec03b4f4c52d89ed9284f7411"
	Nov 20 20:52:20 ha-922218 kubelet[758]: E1120 20:52:20.398477     758 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6)\"" pod="kube-system/storage-provisioner" podUID="ba16b7d6-abbd-4b0f-9e59-e3ba833be3e6"
	Nov 20 20:52:25 ha-922218 kubelet[758]: I1120 20:52:25.559555     758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 20:52:35 ha-922218 kubelet[758]: I1120 20:52:35.344381     758 scope.go:117] "RemoveContainer" containerID="fac2b6f885b5faaecbbea594acb9ce5d1f4225dec03b4f4c52d89ed9284f7411"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-922218 -n ha-922218
helpers_test.go:269: (dbg) Run:  kubectl --context ha-922218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.70s)
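
Triage sketch (illustrative, not part of the harness output): the post-mortem above shows two degradation signals: both coredns replicas time out dialing the kubernetes Service VIP (dial tcp 10.96.0.1:443: i/o timeout), and the m02 kubelet logs ContainerGCFailed against /var/run/crio/crio.sock. The commands below are a minimal way to check both by hand, assuming the ha-922218 profile is still running; the pod name "netcheck" is invented for illustration.

	# Check TCP reachability of the Service VIP from inside the cluster
	# (busybox nc: -z = probe only, -w 5 = 5-second timeout).
	kubectl --context ha-922218 run --rm -i netcheck --image=busybox --restart=Never \
	  -- nc -z -w 5 10.96.0.1 443; echo "nc exit: $?"

	# Confirm the CRI socket exists on the secondary control plane node.
	minikube -p ha-922218 ssh -n ha-922218-m02 -- ls -l /var/run/crio/crio.sock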

TestJSONOutput/pause/Command (2.37s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-251329 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-251329 --output=json --user=testUser: exit status 80 (2.367861683s)

-- stdout --
	{"specversion":"1.0","id":"9ab48458-00e9-4082-a306-856791f9dddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-251329 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"887f55bc-5d33-4350-b872-566de13799a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-20T21:01:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"1d9a77ab-002f-47ac-9bbb-0ae9f28360b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-251329 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.37s)
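
Triage sketch (illustrative, not part of the harness output): the GUEST_PAUSE error above comes from minikube shelling out to sudo runc list -f json on the node, which fails because the default runc state directory /run/runc does not exist there. Assuming the json-output-251329 profile is still up, the failure can be reproduced directly:

	# Re-run the exact command minikube executes; expect the same
	# "open /run/runc: no such file or directory" error.
	minikube -p json-output-251329 ssh -- sudo runc list -f json

	# Check whether the runc state directory exists at all on the node.
	minikube -p json-output-251329 ssh -- 'ls -ld /run/runc || true'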

TestJSONOutput/unpause/Command (2.15s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-251329 --output=json --user=testUser
E1120 21:01:46.755414  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-251329 --output=json --user=testUser: exit status 80 (2.149666343s)

-- stdout --
	{"specversion":"1.0","id":"2cb984cd-9997-4dbc-95bd-9c6560f0cbee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-251329 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d2f40491-5cce-4c53-9508-86964325aa14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-20T21:01:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"559e3984-01b8-43e3-8948-9cea99cdefb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-251329 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.15s)
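
Triage sketch (illustrative, not part of the harness output): GUEST_UNPAUSE fails in the same runc list path as the pause failure above, so this is one root cause surfacing twice rather than an independent regression. Assuming the profile is still up, crictl info dumps the CRI runtime status and configuration that cri-o is actually using on the node:

	# Inspect the CRI runtime status/config reported by cri-o.
	minikube -p json-output-251329 ssh -- sudo crictl info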

TestPause/serial/Pause (6s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-643572 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-643572 --alsologtostderr -v=5: exit status 80 (1.959232662s)

-- stdout --
	* Pausing node pause-643572 ... 
	
	

-- /stdout --
** stderr ** 
	I1120 21:15:17.736530  445237 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:15:17.736839  445237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:17.736851  445237 out.go:374] Setting ErrFile to fd 2...
	I1120 21:15:17.736858  445237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:17.737052  445237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:15:17.737354  445237 out.go:368] Setting JSON to false
	I1120 21:15:17.737421  445237 mustload.go:66] Loading cluster: pause-643572
	I1120 21:15:17.737808  445237 config.go:182] Loaded profile config "pause-643572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:17.738258  445237 cli_runner.go:164] Run: docker container inspect pause-643572 --format={{.State.Status}}
	I1120 21:15:17.756367  445237 host.go:66] Checking if "pause-643572" exists ...
	I1120 21:15:17.756696  445237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:15:17.816068  445237 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-20 21:15:17.806521463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:15:17.816742  445237 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-643572 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:15:17.976005  445237 out.go:179] * Pausing node pause-643572 ... 
	I1120 21:15:17.977753  445237 host.go:66] Checking if "pause-643572" exists ...
	I1120 21:15:17.978104  445237 ssh_runner.go:195] Run: systemctl --version
	I1120 21:15:17.978153  445237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:17.999265  445237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:18.105398  445237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:15:18.121345  445237 pause.go:52] kubelet running: true
	I1120 21:15:18.121415  445237 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:15:18.289802  445237 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:15:18.289892  445237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:15:18.383803  445237 cri.go:89] found id: "27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4"
	I1120 21:15:18.383905  445237 cri.go:89] found id: "ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249"
	I1120 21:15:18.383932  445237 cri.go:89] found id: "e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175"
	I1120 21:15:18.383971  445237 cri.go:89] found id: "fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8"
	I1120 21:15:18.383993  445237 cri.go:89] found id: "5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74"
	I1120 21:15:18.384011  445237 cri.go:89] found id: "209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f"
	I1120 21:15:18.384046  445237 cri.go:89] found id: "7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4"
	I1120 21:15:18.384061  445237 cri.go:89] found id: ""
	I1120 21:15:18.384153  445237 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:15:18.399155  445237 retry.go:31] will retry after 358.035087ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:15:18Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:15:18.760323  445237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:15:18.781677  445237 pause.go:52] kubelet running: false
	I1120 21:15:18.781739  445237 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:15:18.991581  445237 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:15:18.991807  445237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:15:19.101838  445237 cri.go:89] found id: "27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4"
	I1120 21:15:19.101870  445237 cri.go:89] found id: "ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249"
	I1120 21:15:19.101899  445237 cri.go:89] found id: "e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175"
	I1120 21:15:19.101906  445237 cri.go:89] found id: "fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8"
	I1120 21:15:19.101910  445237 cri.go:89] found id: "5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74"
	I1120 21:15:19.101915  445237 cri.go:89] found id: "209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f"
	I1120 21:15:19.101920  445237 cri.go:89] found id: "7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4"
	I1120 21:15:19.101923  445237 cri.go:89] found id: ""
	I1120 21:15:19.101999  445237 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:15:19.118668  445237 retry.go:31] will retry after 248.501108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:15:19Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:15:19.368278  445237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:15:19.385887  445237 pause.go:52] kubelet running: false
	I1120 21:15:19.385964  445237 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:15:19.529141  445237 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:15:19.529250  445237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:15:19.604720  445237 cri.go:89] found id: "27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4"
	I1120 21:15:19.604751  445237 cri.go:89] found id: "ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249"
	I1120 21:15:19.604758  445237 cri.go:89] found id: "e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175"
	I1120 21:15:19.604764  445237 cri.go:89] found id: "fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8"
	I1120 21:15:19.604769  445237 cri.go:89] found id: "5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74"
	I1120 21:15:19.604774  445237 cri.go:89] found id: "209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f"
	I1120 21:15:19.604777  445237 cri.go:89] found id: "7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4"
	I1120 21:15:19.604781  445237 cri.go:89] found id: ""
	I1120 21:15:19.604837  445237 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:15:19.619756  445237 out.go:203] 
	W1120 21:15:19.621094  445237 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:15:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:15:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:15:19.621122  445237 out.go:285] * 
	* 
	W1120 21:15:19.626362  445237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:15:19.627983  445237 out.go:203] 

** /stderr **
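
The retry.go:31 lines in the stderr above show the failing probe being retried with a short randomized backoff (358ms, then 248ms) before minikube gives up. A generic sketch of that retry-with-backoff pattern, not minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a randomized backoff between
// tries, and returns the last error once the attempts are exhausted.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
	}
	return err
}

func main() {
	// Stand-in for the failing `sudo runc list -f json` probe above.
	err := retry(3, 250*time.Millisecond, func() error {
		return fmt.Errorf("list running: runc: exit status 1")
	})
	fmt.Println("giving up:", err)
}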
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-643572 --alsologtostderr -v=5" : exit status 80
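
Every probe in the pause path above bottomed out in the same command, sudo runc list -f json, which exits 1 because runc's default state directory /run/runc does not exist on the node (the logs only show the open() failure; why the directory is missing, e.g. crio driving containers through a different runtime root, is not visible here). A sketch that reproduces the probe from the host, substituting docker exec for minikube's SSH runner; the container name comes from the logs, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe minikube ran over SSH in the pause path above.
	out, err := exec.Command("docker", "exec", "pause-643572",
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Expected on this node: `open /run/runc: no such file or directory`.
		// runc keeps container state under /run/runc by default, and `list`
		// fails outright when that directory is absent.
		fmt.Println("probe failed:", err)
	}
}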
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-643572
helpers_test.go:243: (dbg) docker inspect pause-643572:

-- stdout --
	[
	    {
	        "Id": "dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4",
	        "Created": "2025-11-20T21:14:30.981194784Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:14:31.030338057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/hosts",
	        "LogPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4-json.log",
	        "Name": "/pause-643572",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-643572:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-643572",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4",
	                "LowerDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-643572",
	                "Source": "/var/lib/docker/volumes/pause-643572/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-643572",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-643572",
	                "name.minikube.sigs.k8s.io": "pause-643572",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d457fac682eeaaa3a2a8b6834f8890ea90674a05734462eb429822e146c5d4ff",
	            "SandboxKey": "/var/run/docker/netns/d457fac682ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-643572": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e4fcb374cc0c4ac1a66f944635319c2747cf3cfbcad77203adc396148b3ef983",
	                    "EndpointID": "4826255bdf02dcd46bc39b50e012e640a25cb8aa42f330a5477ceb91ed1662c0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "72:01:8d:46:7f:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-643572",
	                        "dbe3632f021c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
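
The NetworkSettings.Ports map in the inspect output above is how minikube resolved its SSH endpoint earlier in the run: the cli_runner line at 21:15:17.978 used the same Go template, and the HostPort it returned (32983) matches the sshutil line that follows it. The lookup can be reproduced from the host; only the template and container name below come from the logs, the program itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same inspect template the cli_runner log line used for port 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-643572").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "32983" in this run
}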
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-643572 -n pause-643572
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-643572 -n pause-643572: exit status 2 (411.426887ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
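
The "(may be ok)" note reflects how minikube status reports state: per the upstream `minikube status --help` text, the exit code encodes host, cluster, and Kubernetes health as bits from right to left (1 = host not OK, 2 = cluster not OK, 4 = kubernetes not OK), so exit status 2 alongside a Running host is self-consistent here. A tiny decoder sketch under that documented scheme (an assumption from upstream help text, not verified against this exact build):

package main

import "fmt"

// decode interprets a minikube status exit code under the bit scheme described
// in `minikube status --help`: 1 = host NOK, 2 = cluster NOK, 4 = kubernetes NOK.
func decode(code int) {
	fmt.Printf("exit %d: host ok=%v, cluster ok=%v, kubernetes ok=%v\n",
		code, code&1 == 0, code&2 == 0, code&4 == 0)
}

func main() {
	decode(2) // the exit status seen above: host running, cluster not healthy
}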
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-643572 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-643572 logs -n 25: (1.141622544s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-936763 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cri-dockerd --version                                                                                 │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl cat containerd --no-pager                                                                   │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /etc/containerd/config.toml                                                                       │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo containerd config dump                                                                                │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl cat crio --no-pager                                                                         │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo crio config                                                                                           │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ delete  │ -p cilium-936763                                                                                                            │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p force-systemd-env-267271 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-267271  │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p force-systemd-env-267271                                                                                                 │ force-systemd-env-267271  │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p NoKubernetes-806709                                                                                                      │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p force-systemd-flag-687992 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-687992 │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ start   │ -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p offline-crio-735987                                                                                                      │ offline-crio-735987       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p pause-643572 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-643572              │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p cert-expiration-118194 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-118194    │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ pause   │ -p pause-643572 --alsologtostderr -v=5                                                                                      │ pause-643572              │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ ssh     │ -p NoKubernetes-806709 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:15:12
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:15:12.670015  442466 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:15:12.670174  442466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:12.670178  442466 out.go:374] Setting ErrFile to fd 2...
	I1120 21:15:12.670182  442466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:12.670588  442466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:15:12.671345  442466 out.go:368] Setting JSON to false
	I1120 21:15:12.673289  442466 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14255,"bootTime":1763659058,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:15:12.673429  442466 start.go:143] virtualization: kvm guest
	I1120 21:15:12.675408  442466 out.go:179] * [cert-expiration-118194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:15:12.678761  442466 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:15:12.678814  442466 notify.go:221] Checking for updates...
	I1120 21:15:12.682620  442466 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:15:12.684384  442466 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:15:12.686792  442466 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:15:12.688719  442466 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:15:12.691296  442466 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:15:12.693073  442466 config.go:182] Loaded profile config "NoKubernetes-806709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1120 21:15:12.693203  442466 config.go:182] Loaded profile config "force-systemd-flag-687992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:12.693398  442466 config.go:182] Loaded profile config "pause-643572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:12.693533  442466 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:15:12.735454  442466 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:15:12.735607  442466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:15:12.821935  442466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-20 21:15:12.810034431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:15:12.822026  442466 docker.go:319] overlay module found
	I1120 21:15:12.824697  442466 out.go:179] * Using the docker driver based on user configuration
	I1120 21:15:12.825909  442466 start.go:309] selected driver: docker
	I1120 21:15:12.825918  442466 start.go:930] validating driver "docker" against <nil>
	I1120 21:15:12.825929  442466 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:15:12.826585  442466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:15:12.928499  442466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-20 21:15:12.914351568 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:15:12.928804  442466 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:15:12.929041  442466 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 21:15:12.930861  442466 out.go:179] * Using Docker driver with root privileges
	I1120 21:15:12.932327  442466 cni.go:84] Creating CNI manager for ""
	I1120 21:15:12.932398  442466 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:15:12.932406  442466 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:15:12.932496  442466 start.go:353] cluster config:
	{Name:cert-expiration-118194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-118194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:15:12.936571  442466 out.go:179] * Starting "cert-expiration-118194" primary control-plane node in "cert-expiration-118194" cluster
	I1120 21:15:12.937807  442466 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:15:12.939268  442466 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:15:08.169683  440243 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:15:08.169973  440243 start.go:159] libmachine.API.Create for "force-systemd-flag-687992" (driver="docker")
	I1120 21:15:08.170008  440243 client.go:173] LocalClient.Create starting
	I1120 21:15:08.170107  440243 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:15:08.170144  440243 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:08.170164  440243 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:08.170265  440243 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:15:08.170299  440243 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:08.170315  440243 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:08.170779  440243 cli_runner.go:164] Run: docker network inspect force-systemd-flag-687992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:15:08.190317  440243 cli_runner.go:211] docker network inspect force-systemd-flag-687992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:15:08.190394  440243 network_create.go:284] running [docker network inspect force-systemd-flag-687992] to gather additional debugging logs...
	I1120 21:15:08.190415  440243 cli_runner.go:164] Run: docker network inspect force-systemd-flag-687992
	W1120 21:15:08.210046  440243 cli_runner.go:211] docker network inspect force-systemd-flag-687992 returned with exit code 1
	I1120 21:15:08.210091  440243 network_create.go:287] error running [docker network inspect force-systemd-flag-687992]: docker network inspect force-systemd-flag-687992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-687992 not found
	I1120 21:15:08.210106  440243 network_create.go:289] output of [docker network inspect force-systemd-flag-687992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-687992 not found
	
	** /stderr **
	I1120 21:15:08.210200  440243 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:15:08.228205  440243 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:15:08.228644  440243 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:15:08.229064  440243 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:15:08.229490  440243 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-51b5f54a9bfa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:ab:07:47:6e:ba} reservation:<nil>}
	I1120 21:15:08.229900  440243 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e4fcb374cc0c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:d9:f7:f5:dc:b3} reservation:<nil>}
	I1120 21:15:08.230323  440243 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-1de9a6e10c27 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:e6:86:b1:21:70} reservation:<nil>}
	I1120 21:15:08.230953  440243 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7a680}
	I1120 21:15:08.230985  440243 network_create.go:124] attempt to create docker network force-systemd-flag-687992 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:15:08.231032  440243 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-687992 force-systemd-flag-687992
	I1120 21:15:08.287630  440243 network_create.go:108] docker network force-systemd-flag-687992 192.168.103.0/24 created
	I1120 21:15:08.287700  440243 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-flag-687992" container
	I1120 21:15:08.287776  440243 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:15:08.307064  440243 cli_runner.go:164] Run: docker volume create force-systemd-flag-687992 --label name.minikube.sigs.k8s.io=force-systemd-flag-687992 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:15:08.457266  440243 oci.go:103] Successfully created a docker volume force-systemd-flag-687992
	I1120 21:15:08.457354  440243 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-687992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-687992 --entrypoint /usr/bin/test -v force-systemd-flag-687992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:15:08.890954  440243 oci.go:107] Successfully prepared a docker volume force-systemd-flag-687992
	I1120 21:15:08.891022  440243 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:15:08.891034  440243 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:15:08.891106  440243 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-687992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1120 21:15:12.338777  440243 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-687992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (3.447601129s)
	I1120 21:15:12.338812  440243 kic.go:203] duration metric: took 3.447774491s to extract preloaded images to volume ...
	W1120 21:15:12.338904  440243 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:15:12.338938  440243 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:15:12.338980  440243 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:15:12.417058  440243 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-687992 --name force-systemd-flag-687992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-687992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-687992 --network force-systemd-flag-687992 --ip 192.168.103.2 --volume force-systemd-flag-687992:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:15:12.909407  440243 cli_runner.go:164] Run: docker container inspect force-systemd-flag-687992 --format={{.State.Running}}
	I1120 21:15:12.938336  440243 cli_runner.go:164] Run: docker container inspect force-systemd-flag-687992 --format={{.State.Status}}
	I1120 21:15:12.940442  442466 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:15:12.940483  442466 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:15:12.940494  442466 cache.go:65] Caching tarball of preloaded images
	I1120 21:15:12.940512  442466 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:15:12.940594  442466 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:15:12.940604  442466 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:15:12.940739  442466 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/cert-expiration-118194/config.json ...
	I1120 21:15:12.940762  442466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/cert-expiration-118194/config.json: {Name:mk4e88f5599ee71a2362c1d4e82a0d51c1a6f75b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:15:12.969741  442466 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:15:12.969759  442466 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:15:12.969774  442466 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:15:12.969812  442466 start.go:360] acquireMachinesLock for cert-expiration-118194: {Name:mk11221ba0043d36859e6141f8b60a31e3a87a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:15:12.969938  442466 start.go:364] duration metric: took 96.657µs to acquireMachinesLock for "cert-expiration-118194"
	I1120 21:15:12.969964  442466 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-118194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-118194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:15:12.970078  442466 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:15:09.010996  440643 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:15:09.011203  440643 start.go:159] libmachine.API.Create for "NoKubernetes-806709" (driver="docker")
	I1120 21:15:09.011248  440643 client.go:173] LocalClient.Create starting
	I1120 21:15:09.011337  440643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:15:09.011378  440643 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:09.011395  440643 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:09.011459  440643 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:15:09.011488  440643 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:09.011505  440643 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:09.011852  440643 cli_runner.go:164] Run: docker network inspect NoKubernetes-806709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:15:09.031806  440643 cli_runner.go:211] docker network inspect NoKubernetes-806709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:15:09.031872  440643 network_create.go:284] running [docker network inspect NoKubernetes-806709] to gather additional debugging logs...
	I1120 21:15:09.031892  440643 cli_runner.go:164] Run: docker network inspect NoKubernetes-806709
	W1120 21:15:09.065274  440643 cli_runner.go:211] docker network inspect NoKubernetes-806709 returned with exit code 1
	I1120 21:15:09.065315  440643 network_create.go:287] error running [docker network inspect NoKubernetes-806709]: docker network inspect NoKubernetes-806709: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-806709 not found
	I1120 21:15:09.065333  440643 network_create.go:289] output of [docker network inspect NoKubernetes-806709]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-806709 not found
	
	** /stderr **
	I1120 21:15:09.065446  440643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:15:09.090753  440643 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:15:09.091604  440643 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:15:09.092362  440643 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:15:09.093070  440643 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-51b5f54a9bfa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:ab:07:47:6e:ba} reservation:<nil>}
	I1120 21:15:09.095935  440643 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e4fcb374cc0c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:0e:d9:f7:f5:dc:b3} reservation:<nil>}
	I1120 21:15:09.096937  440643 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017f5450}
	I1120 21:15:09.096969  440643 network_create.go:124] attempt to create docker network NoKubernetes-806709 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1120 21:15:09.097031  440643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-806709 NoKubernetes-806709
	I1120 21:15:09.167595  440643 network_create.go:108] docker network NoKubernetes-806709 192.168.94.0/24 created
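
The "skipping subnet ... that is taken" lines above show how the docker driver picks a network: it walks 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, 85, ...) and takes the first range with no existing bridge or host interface, here 192.168.94.0/24. A minimal Go sketch of that walk, assuming isTaken stands in for the real interface/bridge check:

    package netpick

    import "fmt"

    // FreeSubnet walks the private 192.168.x.0/24 candidates in the same
    // order the log shows (49, 58, 67, 76, 85, 94, ...) and returns the
    // first one the caller's predicate reports as unused.
    func FreeSubnet(isTaken func(cidr string) bool) (string, error) {
    	for third := 49; third <= 247; third += 9 { // step of 9 matches the log
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !isTaken(cidr) {
    			return cidr, nil
    		}
    	}
    	return "", fmt.Errorf("no free /24 among the 192.168.0.0/16 candidates")
    }

Because 192.168.49.0/24 is the first candidate, the first profile on a fresh host typically lands there; the five profiles already running in this job are why NoKubernetes-806709 ends up at 192.168.94.0/24.
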
	I1120 21:15:09.167645  440643 kic.go:121] calculated static IP "192.168.94.2" for the "NoKubernetes-806709" container
	I1120 21:15:09.167763  440643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:15:09.191604  440643 cli_runner.go:164] Run: docker volume create NoKubernetes-806709 --label name.minikube.sigs.k8s.io=NoKubernetes-806709 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:15:09.214479  440643 oci.go:103] Successfully created a docker volume NoKubernetes-806709
	I1120 21:15:09.214586  440643 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-806709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-806709 --entrypoint /usr/bin/test -v NoKubernetes-806709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:15:10.477400  440643 cli_runner.go:217] Completed: docker run --rm --name NoKubernetes-806709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-806709 --entrypoint /usr/bin/test -v NoKubernetes-806709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (1.262746531s)
	I1120 21:15:10.477442  440643 oci.go:107] Successfully prepared a docker volume NoKubernetes-806709
	I1120 21:15:10.477497  440643 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1120 21:15:10.477604  440643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:15:10.477674  440643 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:15:10.477727  440643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:15:10.545127  440643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-806709 --name NoKubernetes-806709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-806709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-806709 --network NoKubernetes-806709 --ip 192.168.94.2 --volume NoKubernetes-806709:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:15:12.232074  440643 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-806709 --name NoKubernetes-806709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-806709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-806709 --network NoKubernetes-806709 --ip 192.168.94.2 --volume NoKubernetes-806709:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a: (1.686878662s)
	I1120 21:15:12.232168  440643 cli_runner.go:164] Run: docker container inspect NoKubernetes-806709 --format={{.State.Running}}
	I1120 21:15:12.254986  440643 cli_runner.go:164] Run: docker container inspect NoKubernetes-806709 --format={{.State.Status}}
	I1120 21:15:12.275332  440643 cli_runner.go:164] Run: docker exec NoKubernetes-806709 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:15:12.331725  440643 oci.go:144] the created container "NoKubernetes-806709" has a running status.
	I1120 21:15:12.331762  440643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa...
	I1120 21:15:12.587703  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1120 21:15:12.587768  440643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:15:12.647787  440643 cli_runner.go:164] Run: docker container inspect NoKubernetes-806709 --format={{.State.Status}}
	I1120 21:15:12.672772  440643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:15:12.672899  440643 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-806709 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:15:12.753762  440643 cli_runner.go:164] Run: docker container inspect NoKubernetes-806709 --format={{.State.Status}}
	I1120 21:15:12.784893  440643 machine.go:94] provisionDockerMachine start ...
	I1120 21:15:12.785109  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:12.816507  440643 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:12.816800  440643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1120 21:15:12.816816  440643 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:15:12.973243  440643 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-806709
	
	I1120 21:15:12.973277  440643 ubuntu.go:182] provisioning hostname "NoKubernetes-806709"
	I1120 21:15:12.973338  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:12.999195  440643 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:12.999560  440643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1120 21:15:12.999582  440643 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-806709 && echo "NoKubernetes-806709" | sudo tee /etc/hostname
	I1120 21:15:13.191004  440643 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-806709
	
	I1120 21:15:13.191258  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:13.228409  440643 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:13.228722  440643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1120 21:15:13.228747  440643 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-806709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-806709/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-806709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:15:13.394807  440643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:15:13.394846  440643 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:15:13.394903  440643 ubuntu.go:190] setting up certificates
	I1120 21:15:13.394922  440643 provision.go:84] configureAuth start
	I1120 21:15:13.394993  440643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-806709
	I1120 21:15:13.419707  440643 provision.go:143] copyHostCerts
	I1120 21:15:13.419743  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:15:13.419775  440643 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:15:13.419782  440643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:15:13.419842  440643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:15:13.419919  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:15:13.419940  440643 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:15:13.419945  440643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:15:13.419983  440643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:15:13.420042  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:15:13.420061  440643 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:15:13.420067  440643 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:15:13.420099  440643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:15:13.420163  440643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-806709 san=[127.0.0.1 192.168.94.2 NoKubernetes-806709 localhost minikube]
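
provision.go:117 above issues a per-machine server certificate signed by the minikube CA, with the container IP, hostname, localhost and minikube as SANs. A compressed sketch of that signing step with crypto/x509 follows; the key size, validity window and organization are assumptions for illustration, not minikube's exact values.

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // ServerCert issues a server certificate signed by the given CA, listing
    // the machine's IPs and hostnames as SANs, as in the "generating server
    // cert" log line above (san=[127.0.0.1 192.168.94.2 NoKubernetes-806709
    // localhost minikube]).
    func ServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, hosts []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-806709"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, h := range hosts {
    		if ip := net.ParseIP(h); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, h)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
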
	I1120 21:15:10.083755  441148 out.go:252] * Updating the running docker "pause-643572" container ...
	I1120 21:15:10.083810  441148 machine.go:94] provisionDockerMachine start ...
	I1120 21:15:10.083910  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:10.107459  441148 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:10.107792  441148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1120 21:15:10.107813  441148 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:15:10.247696  441148 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-643572
	
	I1120 21:15:10.247744  441148 ubuntu.go:182] provisioning hostname "pause-643572"
	I1120 21:15:10.247818  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:10.271908  441148 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:10.272253  441148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1120 21:15:10.272275  441148 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-643572 && echo "pause-643572" | sudo tee /etc/hostname
	I1120 21:15:10.420765  441148 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-643572
	
	I1120 21:15:10.420855  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:10.440572  441148 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:10.440813  441148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1120 21:15:10.440835  441148 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-643572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-643572/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-643572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:15:10.583690  441148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:15:10.583723  441148 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:15:10.583758  441148 ubuntu.go:190] setting up certificates
	I1120 21:15:10.583772  441148 provision.go:84] configureAuth start
	I1120 21:15:10.583836  441148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-643572
	I1120 21:15:10.603324  441148 provision.go:143] copyHostCerts
	I1120 21:15:10.603504  441148 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:15:10.603651  441148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:15:10.603747  441148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:15:10.603891  441148 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:15:10.603916  441148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:15:10.603952  441148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:15:10.604022  441148 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:15:10.604030  441148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:15:10.604055  441148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:15:10.604110  441148 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.pause-643572 san=[127.0.0.1 192.168.85.2 localhost minikube pause-643572]
	I1120 21:15:10.862973  441148 provision.go:177] copyRemoteCerts
	I1120 21:15:10.863088  441148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:15:10.863158  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:10.880976  441148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:10.980350  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1120 21:15:10.999306  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:15:11.041376  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:15:11.061040  441148 provision.go:87] duration metric: took 477.254389ms to configureAuth
	I1120 21:15:11.061069  441148 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:15:11.061332  441148 config.go:182] Loaded profile config "pause-643572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:11.061453  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:11.081327  441148 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:11.081634  441148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32983 <nil> <nil>}
	I1120 21:15:11.081657  441148 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:15:12.134465  441148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:15:12.134498  441148 machine.go:97] duration metric: took 2.050676647s to provisionDockerMachine
	I1120 21:15:12.134514  441148 start.go:293] postStartSetup for "pause-643572" (driver="docker")
	I1120 21:15:12.134529  441148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:15:12.134593  441148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:15:12.134649  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:12.155000  441148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:12.255550  441148 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:15:12.260004  441148 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:15:12.260033  441148 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:15:12.260046  441148 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:15:12.260104  441148 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:15:12.260234  441148 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:15:12.260380  441148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:15:12.269475  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:15:12.293959  441148 start.go:296] duration metric: took 159.425246ms for postStartSetup
	I1120 21:15:12.294169  441148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:15:12.294245  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:12.317492  441148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:12.422732  441148 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:15:12.428433  441148 fix.go:56] duration metric: took 2.377407309s for fixHost
	I1120 21:15:12.428462  441148 start.go:83] releasing machines lock for "pause-643572", held for 2.377459134s
	I1120 21:15:12.428531  441148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-643572
	I1120 21:15:12.449538  441148 ssh_runner.go:195] Run: cat /version.json
	I1120 21:15:12.449555  441148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:15:12.449600  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:12.449618  441148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-643572
	I1120 21:15:12.472734  441148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:12.474066  441148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/pause-643572/id_rsa Username:docker}
	I1120 21:15:12.581979  441148 ssh_runner.go:195] Run: systemctl --version
	I1120 21:15:12.674591  441148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:15:12.735058  441148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:15:12.742434  441148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:15:12.742502  441148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:15:12.757036  441148 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:15:12.757173  441148 start.go:496] detecting cgroup driver to use...
	I1120 21:15:12.757426  441148 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:15:12.757493  441148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:15:12.780235  441148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:15:12.799512  441148 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:15:12.799600  441148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:15:12.824823  441148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:15:12.842723  441148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:15:13.030788  441148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:15:13.232990  441148 docker.go:234] disabling docker service ...
	I1120 21:15:13.233050  441148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:15:13.255369  441148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:15:13.273912  441148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:15:13.471681  441148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:15:13.626861  441148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:15:13.642596  441148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:15:13.662010  441148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:15:13.662062  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.674285  441148 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:15:13.674351  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.692383  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.704767  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.718311  441148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:15:13.731687  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.743998  441148 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.754639  441148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:13.765799  441148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:15:13.775030  441148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:15:13.785504  441148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:15:13.922871  441148 ssh_runner.go:195] Run: sudo systemctl restart crio
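
The run of sed -i commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: pin the pause image, force the systemd cgroup manager, and re-insert conmon_cgroup = "pod" after it (the later seds handle the unprivileged-port sysctl the same way). A rough Go equivalent of the first few rewrites, as a sketch over the same regular expressions rather than minikube's code:

    package crioconf

    import "regexp"

    // ApplyCrioOverrides performs the same edits as the sed commands in the
    // log above on the contents of /etc/crio/crio.conf.d/02-crio.conf.
    func ApplyCrioOverrides(conf string) string {
    	// pause_image = "registry.k8s.io/pause:3.10.1"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// cgroup_manager = "systemd"
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	// drop any existing conmon_cgroup line, like sed '/conmon_cgroup = .*/d'
    	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).
    		ReplaceAllString(conf, "")
    	// re-add it right after cgroup_manager, like sed '/.../a conmon_cgroup = "pod"'
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	return conf
    }
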
	I1120 21:15:14.141460  441148 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:15:14.141586  441148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:15:14.146069  441148 start.go:564] Will wait 60s for crictl version
	I1120 21:15:14.146148  441148 ssh_runner.go:195] Run: which crictl
	I1120 21:15:14.150183  441148 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:15:14.177292  441148 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:15:14.177379  441148 ssh_runner.go:195] Run: crio --version
	I1120 21:15:14.214582  441148 ssh_runner.go:195] Run: crio --version
	I1120 21:15:14.252636  441148 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:15:14.254044  441148 cli_runner.go:164] Run: docker network inspect pause-643572 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:15:14.275588  441148 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:15:14.282416  441148 kubeadm.go:884] updating cluster {Name:pause-643572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-643572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:15:14.282583  441148 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:15:14.282639  441148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:15:14.322348  441148 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:15:14.322379  441148 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:15:14.322435  441148 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:15:14.349421  441148 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:15:14.349450  441148 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:15:14.349460  441148 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1120 21:15:14.349592  441148 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-643572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-643572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:15:14.349678  441148 ssh_runner.go:195] Run: crio config
	I1120 21:15:14.412089  441148 cni.go:84] Creating CNI manager for ""
	I1120 21:15:14.412108  441148 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:15:14.412123  441148 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:15:14.412144  441148 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-643572 NodeName:pause-643572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:15:14.412301  441148 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-643572"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:15:14.412374  441148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:15:14.421118  441148 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:15:14.421182  441148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:15:14.429447  441148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1120 21:15:14.444087  441148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:15:14.458037  441148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1120 21:15:14.472351  441148 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:15:14.476895  441148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:15:14.606886  441148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:15:14.623358  441148 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572 for IP: 192.168.85.2
	I1120 21:15:14.623386  441148 certs.go:195] generating shared ca certs ...
	I1120 21:15:14.623416  441148 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:15:14.623587  441148 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:15:14.623646  441148 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:15:14.623660  441148 certs.go:257] generating profile certs ...
	I1120 21:15:14.623782  441148 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/client.key
	I1120 21:15:14.623861  441148 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/apiserver.key.78e396de
	I1120 21:15:14.623924  441148 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/proxy-client.key
	I1120 21:15:14.624074  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:15:14.624116  441148 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:15:14.624130  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:15:14.624165  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:15:14.624197  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:15:14.624251  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:15:14.624309  441148 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:15:14.624993  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:15:14.645076  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:15:14.664282  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:15:14.683716  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:15:14.706201  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1120 21:15:14.730564  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:15:14.752715  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:15:14.773647  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:15:14.793243  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:15:14.814262  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:15:14.835630  441148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:15:14.855236  441148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:15:14.869762  441148 ssh_runner.go:195] Run: openssl version
	I1120 21:15:14.876117  441148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:15:14.884383  441148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:15:14.894113  441148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:15:14.898639  441148 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:15:14.898707  441148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:15:14.938417  441148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:15:14.946700  441148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:15:14.956259  441148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:15:14.966955  441148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:15:14.971299  441148 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:15:14.971377  441148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:15:15.015291  441148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:15:15.023812  441148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:15:15.031896  441148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:15:15.039739  441148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:15:15.044212  441148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:15:15.044297  441148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:15:15.099050  441148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
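
The openssl x509 -hash -noout / ln -fs pairs above install each CA into the OpenSSL trust store by subject hash: the symlinks 51391683.0, 3ec20f2e.0 and b5213941.0 are how OpenSSL's lookup-by-hash finds 254094.pem, 2540942.pem and minikubeCA.pem. A sketch of one such step in Go, shelling out to openssl exactly as the log does:

    package trust

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // LinkBySubjectHash reproduces the hash-then-symlink steps from the log:
    // compute the subject hash of a CA PEM and point <certsDir>/<hash>.0 at
    // it so OpenSSL's lookup-by-hash can find the certificate.
    func LinkBySubjectHash(pem, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %v", pem, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // -f behaviour: replace an existing link
    	return os.Symlink(pem, link)
    }
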
	I1120 21:15:15.107543  441148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:15:15.111802  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:15:15.151850  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:15:15.197262  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:15:15.232982  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:15:15.277808  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:15:15.319916  441148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
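
Each openssl x509 -checkend 86400 call above asks whether a control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration before the restart. The same check expressed in Go, assuming a single-certificate PEM file:

    package certcheck

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // ExpiresWithin is the Go equivalent of `openssl x509 -checkend <secs>`:
    // it reports whether the certificate at pemPath expires within the window.
    func ExpiresWithin(pemPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

For example, ExpiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour) corresponds to the -checkend 86400 probe on the etcd peer certificate above.
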
	I1120 21:15:15.372489  441148 kubeadm.go:401] StartCluster: {Name:pause-643572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-643572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:15:15.372656  441148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:15:15.372715  441148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:15:15.409413  441148 cri.go:89] found id: "27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4"
	I1120 21:15:15.409439  441148 cri.go:89] found id: "ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249"
	I1120 21:15:15.409445  441148 cri.go:89] found id: "e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175"
	I1120 21:15:15.409449  441148 cri.go:89] found id: "fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8"
	I1120 21:15:15.409454  441148 cri.go:89] found id: "5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74"
	I1120 21:15:15.409458  441148 cri.go:89] found id: "209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f"
	I1120 21:15:15.409462  441148 cri.go:89] found id: "7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4"
	I1120 21:15:15.409466  441148 cri.go:89] found id: ""
	I1120 21:15:15.409517  441148 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:15:15.423785  441148 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:15:15Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:15:15.423867  441148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:15:15.434251  441148 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:15:15.434274  441148 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:15:15.434320  441148 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:15:15.447416  441148 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:15:15.448118  441148 kubeconfig.go:125] found "pause-643572" server: "https://192.168.85.2:8443"
	I1120 21:15:15.449011  441148 kapi.go:59] client config for pause-643572: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:15:15.449633  441148 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1120 21:15:15.449652  441148 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1120 21:15:15.449660  441148 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1120 21:15:15.449669  441148 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1120 21:15:15.449675  441148 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1120 21:15:15.450168  441148 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:15:15.462542  441148 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1120 21:15:15.462582  441148 kubeadm.go:602] duration metric: took 28.300865ms to restartPrimaryControlPlane
	I1120 21:15:15.462595  441148 kubeadm.go:403] duration metric: took 90.123439ms to StartCluster
	I1120 21:15:15.462615  441148 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:15:15.462699  441148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:15:15.463473  441148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:15:15.463754  441148 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:15:15.463840  441148 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:15:15.464029  441148 config.go:182] Loaded profile config "pause-643572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:15.466712  441148 out.go:179] * Enabled addons: 
	I1120 21:15:15.466712  441148 out.go:179] * Verifying Kubernetes components...
	I1120 21:15:15.468390  441148 addons.go:515] duration metric: took 4.550237ms for enable addons: enabled=[]
	I1120 21:15:15.468431  441148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:15:15.600126  441148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:15:15.616707  441148 node_ready.go:35] waiting up to 6m0s for node "pause-643572" to be "Ready" ...
	I1120 21:15:15.628367  441148 node_ready.go:49] node "pause-643572" is "Ready"
	I1120 21:15:15.628400  441148 node_ready.go:38] duration metric: took 11.662627ms for node "pause-643572" to be "Ready" ...
	I1120 21:15:15.628431  441148 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:15:15.628488  441148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:15:15.641081  441148 api_server.go:72] duration metric: took 177.285748ms to wait for apiserver process to appear ...
	I1120 21:15:15.641108  441148 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:15:15.641132  441148 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1120 21:15:15.645711  441148 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
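The healthz poll above can be reproduced by hand; `-k` skips verification because the apiserver certificate is issued by minikube's own CA (point `--cacert` at the profile's ca.crt to verify properly):

	# Prints "ok" with HTTP 200 when the apiserver is healthy.
	curl -k https://192.168.85.2:8443/healthz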
	I1120 21:15:15.646867  441148 api_server.go:141] control plane version: v1.34.1
	I1120 21:15:15.646897  441148 api_server.go:131] duration metric: took 5.779987ms to wait for apiserver health ...
	I1120 21:15:15.646908  441148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:15:15.650569  441148 system_pods.go:59] 7 kube-system pods found
	I1120 21:15:15.650617  441148 system_pods.go:61] "coredns-66bc5c9577-r6qfd" [fbadec5c-64a9-4074-8974-ee57a834a726] Running
	I1120 21:15:15.650628  441148 system_pods.go:61] "etcd-pause-643572" [f5b05597-2be6-468a-960e-8d5c53a46c2f] Running
	I1120 21:15:15.650634  441148 system_pods.go:61] "kindnet-hgd2l" [dbe57678-242c-461f-8fb9-ab0f9f330549] Running
	I1120 21:15:15.650639  441148 system_pods.go:61] "kube-apiserver-pause-643572" [7774c269-6bfb-45b4-9bb2-88dabd9866d7] Running
	I1120 21:15:15.650645  441148 system_pods.go:61] "kube-controller-manager-pause-643572" [461f10ed-c7cc-43a0-ba2e-4ecb052545bb] Running
	I1120 21:15:15.650655  441148 system_pods.go:61] "kube-proxy-swvst" [588fdc04-3336-4f17-8703-6d406b356e59] Running
	I1120 21:15:15.650660  441148 system_pods.go:61] "kube-scheduler-pause-643572" [e730f7fb-699a-40bc-89bf-a1e0e5c69ba0] Running
	I1120 21:15:15.650670  441148 system_pods.go:74] duration metric: took 3.75475ms to wait for pod list to return data ...
	I1120 21:15:15.650686  441148 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:15:15.652544  441148 default_sa.go:45] found service account: "default"
	I1120 21:15:15.652569  441148 default_sa.go:55] duration metric: took 1.872852ms for default service account to be created ...
	I1120 21:15:15.652580  441148 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:15:15.655366  441148 system_pods.go:86] 7 kube-system pods found
	I1120 21:15:15.655407  441148 system_pods.go:89] "coredns-66bc5c9577-r6qfd" [fbadec5c-64a9-4074-8974-ee57a834a726] Running
	I1120 21:15:15.655417  441148 system_pods.go:89] "etcd-pause-643572" [f5b05597-2be6-468a-960e-8d5c53a46c2f] Running
	I1120 21:15:15.655424  441148 system_pods.go:89] "kindnet-hgd2l" [dbe57678-242c-461f-8fb9-ab0f9f330549] Running
	I1120 21:15:15.655434  441148 system_pods.go:89] "kube-apiserver-pause-643572" [7774c269-6bfb-45b4-9bb2-88dabd9866d7] Running
	I1120 21:15:15.655446  441148 system_pods.go:89] "kube-controller-manager-pause-643572" [461f10ed-c7cc-43a0-ba2e-4ecb052545bb] Running
	I1120 21:15:15.655454  441148 system_pods.go:89] "kube-proxy-swvst" [588fdc04-3336-4f17-8703-6d406b356e59] Running
	I1120 21:15:15.655461  441148 system_pods.go:89] "kube-scheduler-pause-643572" [e730f7fb-699a-40bc-89bf-a1e0e5c69ba0] Running
	I1120 21:15:15.655477  441148 system_pods.go:126] duration metric: took 2.88969ms to wait for k8s-apps to be running ...
	I1120 21:15:15.655487  441148 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:15:15.655550  441148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:15:15.673435  441148 system_svc.go:56] duration metric: took 17.9358ms WaitForService to wait for kubelet
	I1120 21:15:15.673569  441148 kubeadm.go:587] duration metric: took 209.776063ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:15:15.673613  441148 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:15:15.676621  441148 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:15:15.676651  441148 node_conditions.go:123] node cpu capacity is 8
	I1120 21:15:15.676666  441148 node_conditions.go:105] duration metric: took 3.029369ms to run NodePressure ...
	I1120 21:15:15.676682  441148 start.go:242] waiting for startup goroutines ...
	I1120 21:15:15.676692  441148 start.go:247] waiting for cluster config update ...
	I1120 21:15:15.676703  441148 start.go:256] writing updated cluster config ...
	I1120 21:15:15.677063  441148 ssh_runner.go:195] Run: rm -f paused
	I1120 21:15:15.682257  441148 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:15:15.682717  441148 kapi.go:59] client config for pause-643572: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/client.crt", KeyFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/profiles/pause-643572/client.key", CAFile:"/home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825980), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1120 21:15:15.685735  441148 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6qfd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.690175  441148 pod_ready.go:94] pod "coredns-66bc5c9577-r6qfd" is "Ready"
	I1120 21:15:15.690197  441148 pod_ready.go:86] duration metric: took 4.438181ms for pod "coredns-66bc5c9577-r6qfd" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.692433  441148 pod_ready.go:83] waiting for pod "etcd-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.696366  441148 pod_ready.go:94] pod "etcd-pause-643572" is "Ready"
	I1120 21:15:15.696396  441148 pod_ready.go:86] duration metric: took 3.943654ms for pod "etcd-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.698515  441148 pod_ready.go:83] waiting for pod "kube-apiserver-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.702089  441148 pod_ready.go:94] pod "kube-apiserver-pause-643572" is "Ready"
	I1120 21:15:15.702106  441148 pod_ready.go:86] duration metric: took 3.569076ms for pod "kube-apiserver-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:15.703813  441148 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:16.086685  441148 pod_ready.go:94] pod "kube-controller-manager-pause-643572" is "Ready"
	I1120 21:15:16.086722  441148 pod_ready.go:86] duration metric: took 382.874087ms for pod "kube-controller-manager-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:16.286669  441148 pod_ready.go:83] waiting for pod "kube-proxy-swvst" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:16.686682  441148 pod_ready.go:94] pod "kube-proxy-swvst" is "Ready"
	I1120 21:15:16.686711  441148 pod_ready.go:86] duration metric: took 400.016696ms for pod "kube-proxy-swvst" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:16.886797  441148 pod_ready.go:83] waiting for pod "kube-scheduler-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:17.286844  441148 pod_ready.go:94] pod "kube-scheduler-pause-643572" is "Ready"
	I1120 21:15:17.286870  441148 pod_ready.go:86] duration metric: took 400.048269ms for pod "kube-scheduler-pause-643572" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:15:17.286881  441148 pod_ready.go:40] duration metric: took 1.604595432s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
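The pod_ready loop above is roughly the readiness gate you would get from `kubectl wait` over the same control-plane labels; a hand-rolled approximation (assuming the kubeconfig already targets this cluster):

	# Approximate equivalent of the extra "kube-system pods Ready" wait above.
	kubectl -n kube-system wait pod -l 'k8s-app in (kube-dns, kube-proxy)' \
	  --for=condition=Ready --timeout=4m
	kubectl -n kube-system wait pod \
	  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
	  --for=condition=Ready --timeout=4m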
	I1120 21:15:17.331358  441148 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:15:17.458926  441148 out.go:179] * Done! kubectl is now configured to use "pause-643572" cluster and "default" namespace by default
	I1120 21:15:12.972824  442466 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:15:12.973105  442466 start.go:159] libmachine.API.Create for "cert-expiration-118194" (driver="docker")
	I1120 21:15:12.973132  442466 client.go:173] LocalClient.Create starting
	I1120 21:15:12.973235  442466 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:15:12.973271  442466 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:12.973286  442466 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:12.973348  442466 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:15:12.973373  442466 main.go:143] libmachine: Decoding PEM data...
	I1120 21:15:12.973385  442466 main.go:143] libmachine: Parsing certificate...
	I1120 21:15:12.973786  442466 cli_runner.go:164] Run: docker network inspect cert-expiration-118194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:15:13.000861  442466 cli_runner.go:211] docker network inspect cert-expiration-118194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:15:13.000949  442466 network_create.go:284] running [docker network inspect cert-expiration-118194] to gather additional debugging logs...
	I1120 21:15:13.000967  442466 cli_runner.go:164] Run: docker network inspect cert-expiration-118194
	W1120 21:15:13.025802  442466 cli_runner.go:211] docker network inspect cert-expiration-118194 returned with exit code 1
	I1120 21:15:13.025832  442466 network_create.go:287] error running [docker network inspect cert-expiration-118194]: docker network inspect cert-expiration-118194: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-118194 not found
	I1120 21:15:13.025848  442466 network_create.go:289] output of [docker network inspect cert-expiration-118194]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-118194 not found
	
	** /stderr **
	I1120 21:15:13.025947  442466 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:15:13.052150  442466 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:15:13.053404  442466 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:15:13.054092  442466 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:15:13.055173  442466 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002672700}
	I1120 21:15:13.055209  442466 network_create.go:124] attempt to create docker network cert-expiration-118194 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 21:15:13.055273  442466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-118194 cert-expiration-118194
	I1120 21:15:13.140603  442466 network_create.go:108] docker network cert-expiration-118194 192.168.76.0/24 created
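Network creation above follows a fixed recipe: scan the existing bridge networks, take the first free private /24, then create a labeled bridge network whose .2 address becomes the node IP. The logged command, standalone:

	# 192.168.76.0/24 was the first /24 not already claimed by another minikube network.
	docker network create --driver=bridge \
	  --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=cert-expiration-118194 \
	  cert-expiration-118194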
	I1120 21:15:13.140636  442466 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-118194" container
	I1120 21:15:13.140714  442466 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:15:13.164818  442466 cli_runner.go:164] Run: docker volume create cert-expiration-118194 --label name.minikube.sigs.k8s.io=cert-expiration-118194 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:15:13.194328  442466 oci.go:103] Successfully created a docker volume cert-expiration-118194
	I1120 21:15:13.194398  442466 cli_runner.go:164] Run: docker run --rm --name cert-expiration-118194-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-118194 --entrypoint /usr/bin/test -v cert-expiration-118194:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:15:13.747514  442466 oci.go:107] Successfully prepared a docker volume cert-expiration-118194
	I1120 21:15:13.747610  442466 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:15:13.747621  442466 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:15:13.747683  442466 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-118194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
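The preload step above avoids pulling images inside the new node: a throwaway kicbase container mounts the host's lz4 image tarball read-only alongside the machine volume and untars it in place. The same invocation, with the long logged paths abbreviated into placeholder variables:

	# $PRELOAD and $KICBASE stand in for the full tarball path and kicbase image above.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro \
	  -v cert-expiration-118194:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir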
	I1120 21:15:12.961783  440243 cli_runner.go:164] Run: docker exec force-systemd-flag-687992 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:15:13.032064  440243 oci.go:144] the created container "force-systemd-flag-687992" has a running status.
	I1120 21:15:13.032106  440243 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa...
	I1120 21:15:13.262892  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1120 21:15:13.262948  440243 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:15:13.299771  440243 cli_runner.go:164] Run: docker container inspect force-systemd-flag-687992 --format={{.State.Status}}
	I1120 21:15:13.342628  440243 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:15:13.342652  440243 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-687992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:15:13.417038  440243 cli_runner.go:164] Run: docker container inspect force-systemd-flag-687992 --format={{.State.Status}}
	I1120 21:15:13.459089  440243 machine.go:94] provisionDockerMachine start ...
	I1120 21:15:13.459240  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:13.492026  440243 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:13.492461  440243 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1120 21:15:13.492514  440243 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:15:13.648473  440243 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-687992
	
	I1120 21:15:13.648519  440243 ubuntu.go:182] provisioning hostname "force-systemd-flag-687992"
	I1120 21:15:13.648601  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:13.670504  440243 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:13.670825  440243 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1120 21:15:13.670844  440243 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-687992 && echo "force-systemd-flag-687992" | sudo tee /etc/hostname
	I1120 21:15:13.846395  440243 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-687992
	
	I1120 21:15:13.846468  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:13.871047  440243 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:13.871358  440243 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1120 21:15:13.871403  440243 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-687992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-687992/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-687992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:15:14.025760  440243 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:15:14.025796  440243 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:15:14.025847  440243 ubuntu.go:190] setting up certificates
	I1120 21:15:14.025864  440243 provision.go:84] configureAuth start
	I1120 21:15:14.025942  440243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-687992
	I1120 21:15:14.046439  440243 provision.go:143] copyHostCerts
	I1120 21:15:14.046484  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:15:14.046526  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:15:14.046541  440243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:15:14.046635  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:15:14.046736  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:15:14.046756  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:15:14.046763  440243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:15:14.046795  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:15:14.046854  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:15:14.046882  440243 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:15:14.046893  440243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:15:14.046939  440243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:15:14.047018  440243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-687992 san=[127.0.0.1 192.168.103.2 force-systemd-flag-687992 localhost minikube]
	I1120 21:15:14.881488  440243 provision.go:177] copyRemoteCerts
	I1120 21:15:14.881555  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:15:14.881602  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:14.903762  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa Username:docker}
	I1120 21:15:15.006044  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:15:15.006114  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1120 21:15:15.028651  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:15:15.028723  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:15:15.050088  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:15:15.050152  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:15:15.069563  440243 provision.go:87] duration metric: took 1.043676723s to configureAuth
	I1120 21:15:15.069594  440243 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:15:15.069794  440243 config.go:182] Loaded profile config "force-systemd-flag-687992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:15:15.069970  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:15.097473  440243 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:15.097830  440243 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1120 21:15:15.097856  440243 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:15:15.412439  440243 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:15:15.412469  440243 machine.go:97] duration metric: took 1.953338279s to provisionDockerMachine
	I1120 21:15:15.412483  440243 client.go:176] duration metric: took 7.242469078s to LocalClient.Create
	I1120 21:15:15.412508  440243 start.go:167] duration metric: took 7.242538916s to libmachine.API.Create "force-systemd-flag-687992"
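Provisioning finished by writing the runtime options file and bouncing CRI-O (the SSH command a few lines up). One way to confirm what it left behind on the node (assuming `minikube ssh` passes the trailing command through, as it normally does):

	# Expected contents, per the SSH output above:
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	minikube ssh -p force-systemd-flag-687992 -- cat /etc/sysconfig/crio.minikube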
	I1120 21:15:15.412521  440243 start.go:293] postStartSetup for "force-systemd-flag-687992" (driver="docker")
	I1120 21:15:15.412534  440243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:15:15.412597  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:15:15.412644  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:15.434155  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa Username:docker}
	I1120 21:15:15.546284  440243 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:15:15.550512  440243 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:15:15.550550  440243 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:15:15.550565  440243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:15:15.550636  440243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:15:15.550752  440243 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:15:15.550779  440243 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 21:15:15.550905  440243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:15:15.559496  440243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:15:15.582144  440243 start.go:296] duration metric: took 169.606206ms for postStartSetup
	I1120 21:15:15.582608  440243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-687992
	I1120 21:15:15.603209  440243 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/force-systemd-flag-687992/config.json ...
	I1120 21:15:15.603551  440243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:15:15.603610  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:15.624523  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa Username:docker}
	I1120 21:15:15.726339  440243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:15:15.732200  440243 start.go:128] duration metric: took 7.565365391s to createHost
	I1120 21:15:15.732296  440243 start.go:83] releasing machines lock for "force-systemd-flag-687992", held for 7.565588056s
	I1120 21:15:15.732370  440243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-687992
	I1120 21:15:15.752944  440243 ssh_runner.go:195] Run: cat /version.json
	I1120 21:15:15.753004  440243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:15:15.753021  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:15.753078  440243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-687992
	I1120 21:15:15.774546  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa Username:docker}
	I1120 21:15:15.774792  440243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/force-systemd-flag-687992/id_rsa Username:docker}
	I1120 21:15:15.925416  440243 ssh_runner.go:195] Run: systemctl --version
	I1120 21:15:15.932879  440243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:15:15.971775  440243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:15:15.977046  440243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:15:15.977120  440243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:15:16.358974  440243 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
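Before installing its own CNI, minikube side-lines any pre-existing bridge or podman configs by renaming them with a .mk_disabled suffix, which the runtime's CNI loader ignores (only *.conf, *.conflist, and *.json are picked up). A readable form of the find command above:

	# Rename every bridge/podman CNI config that is not already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;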
	I1120 21:15:16.359001  440243 start.go:496] detecting cgroup driver to use...
	I1120 21:15:16.359018  440243 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1120 21:15:16.359081  440243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:15:16.376646  440243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:15:16.390674  440243 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:15:16.390753  440243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:15:16.409048  440243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:15:16.429312  440243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:15:16.516189  440243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:15:16.660379  440243 docker.go:234] disabling docker service ...
	I1120 21:15:16.660456  440243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:15:16.681209  440243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:15:16.695366  440243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:15:16.877086  440243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:15:16.967329  440243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:15:16.980604  440243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:15:16.995180  440243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:15:16.995259  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.090450  440243 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:15:17.090509  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.186677  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.264914  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.275660  440243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:15:17.284793  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.409738  440243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.448736  440243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:17.572675  440243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:15:17.580717  440243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:15:17.589632  440243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:15:17.675788  440243 ssh_runner.go:195] Run: sudo systemctl restart crio
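The run of sed commands above drives /etc/crio/crio.conf.d/02-crio.conf toward a small set of settings before the restart; the intended end state looks roughly like this (illustrative; other keys already in the drop-in are left untouched):

	# Target drop-in state (TOML), then reload and restart as logged above.
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	sudo systemctl daemon-reload && sudo systemctl restart crio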
	I1120 21:15:18.127335  440243 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:15:18.127401  440243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:15:18.132281  440243 start.go:564] Will wait 60s for crictl version
	I1120 21:15:18.132342  440243 ssh_runner.go:195] Run: which crictl
	I1120 21:15:18.137240  440243 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:15:18.172649  440243 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:15:18.172896  440243 ssh_runner.go:195] Run: crio --version
	I1120 21:15:18.211074  440243 ssh_runner.go:195] Run: crio --version
	I1120 21:15:13.811278  440643 provision.go:177] copyRemoteCerts
	I1120 21:15:13.811352  440643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:15:13.811401  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:13.843425  440643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa Username:docker}
	I1120 21:15:13.945815  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1120 21:15:13.945874  440643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1120 21:15:13.969691  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1120 21:15:13.969761  440643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:15:13.990760  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1120 21:15:13.990840  440643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:15:14.011413  440643 provision.go:87] duration metric: took 616.470677ms to configureAuth
	I1120 21:15:14.011448  440643 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:15:14.011617  440643 config.go:182] Loaded profile config "NoKubernetes-806709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1120 21:15:14.011842  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:14.037023  440643 main.go:143] libmachine: Using SSH client type: native
	I1120 21:15:14.037364  440643 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1120 21:15:14.037392  440643 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:15:14.362462  440643 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:15:14.362494  440643 machine.go:97] duration metric: took 1.577461077s to provisionDockerMachine
	I1120 21:15:14.362507  440643 client.go:176] duration metric: took 5.351247854s to LocalClient.Create
	I1120 21:15:14.362533  440643 start.go:167] duration metric: took 5.351330267s to libmachine.API.Create "NoKubernetes-806709"
	I1120 21:15:14.362548  440643 start.go:293] postStartSetup for "NoKubernetes-806709" (driver="docker")
	I1120 21:15:14.362569  440643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:15:14.362665  440643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:15:14.362723  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:14.384348  440643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa Username:docker}
	I1120 21:15:14.490501  440643 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:15:14.494673  440643 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:15:14.494703  440643 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:15:14.494717  440643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:15:14.494773  440643 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:15:14.494845  440643 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:15:14.494866  440643 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> /etc/ssl/certs/2540942.pem
	I1120 21:15:14.494979  440643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:15:14.504297  440643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:15:14.534975  440643 start.go:296] duration metric: took 172.407137ms for postStartSetup
	I1120 21:15:14.535426  440643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-806709
	I1120 21:15:14.556671  440643 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/NoKubernetes-806709/config.json ...
	I1120 21:15:14.557020  440643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:15:14.557081  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:14.578434  440643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa Username:docker}
	I1120 21:15:14.672695  440643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:15:14.677802  440643 start.go:128] duration metric: took 5.669552195s to createHost
	I1120 21:15:14.677824  440643 start.go:83] releasing machines lock for "NoKubernetes-806709", held for 5.669690178s
	I1120 21:15:14.677898  440643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-806709
	I1120 21:15:14.699635  440643 ssh_runner.go:195] Run: cat /version.json
	I1120 21:15:14.699704  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:14.699708  440643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:15:14.699782  440643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-806709
	I1120 21:15:14.721568  440643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa Username:docker}
	I1120 21:15:14.722302  440643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/NoKubernetes-806709/id_rsa Username:docker}
	I1120 21:15:14.818298  440643 ssh_runner.go:195] Run: systemctl --version
	I1120 21:15:14.891187  440643 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:15:14.928587  440643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:15:14.934236  440643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:15:14.934308  440643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:15:14.964829  440643 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:15:14.964864  440643 start.go:496] detecting cgroup driver to use...
	I1120 21:15:14.964902  440643 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:15:14.964967  440643 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:15:14.982279  440643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:15:14.996299  440643 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:15:14.996369  440643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:15:15.015878  440643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:15:15.039270  440643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:15:15.145726  440643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:15:15.245347  440643 docker.go:234] disabling docker service ...
	I1120 21:15:15.245416  440643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:15:15.274730  440643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:15:15.290357  440643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:15:15.403098  440643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:15:15.524338  440643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:15:15.539422  440643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:15:15.556255  440643 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1120 21:15:15.556305  440643 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1120 21:15:15.556353  440643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:15.568775  440643 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:15:15.568841  440643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:15.577995  440643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:15.588512  440643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:15:15.598835  440643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:15:15.610254  440643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:15:15.619836  440643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:15:15.630601  440643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:15:15.731989  440643 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:15:18.124303  440643 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.392276446s)
	I1120 21:15:18.124334  440643 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:15:18.124387  440643 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:15:18.129544  440643 start.go:564] Will wait 60s for crictl version
	I1120 21:15:18.129607  440643 ssh_runner.go:195] Run: which crictl
	I1120 21:15:18.134106  440643 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:15:18.174198  440643 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:15:18.174303  440643 ssh_runner.go:195] Run: crio --version
	I1120 21:15:18.212713  440643 ssh_runner.go:195] Run: crio --version
	I1120 21:15:18.248162  440243 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:15:18.249203  440643 out.go:179] * Preparing CRI-O 1.34.2 ...
	I1120 21:15:18.251399  440643 ssh_runner.go:195] Run: rm -f paused
	I1120 21:15:18.257119  440643 out.go:179] * Done! minikube is ready without Kubernetes!
	I1120 21:15:18.259923  440643 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
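
The sequence above (write /etc/crictl.yaml, patch the CRI-O drop-in with sed, restart crio, then wait up to 60s for /var/run/crio/crio.sock) is the generic runtime hand-off. A minimal Go sketch of the restart-and-wait step, assuming direct root access on the node rather than minikube's ssh_runner; waitForSocket is a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)

	// waitForSocket polls a unix socket until it accepts a connection,
	// mirroring the "Will wait 60s for socket path" step in the log above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("unix", path, time.Second); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}

	func main() {
		// Restart CRI-O (assumes root and systemd), then wait for its CRI socket.
		if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
			panic(err)
		}
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio.sock is accepting connections")
	}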
	
	
	==> CRI-O <==
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.030242395Z" level=info msg="RDT not available in the host system"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.030260417Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031264507Z" level=info msg="Conmon does support the --sync option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031303233Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031319041Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.032267287Z" level=info msg="Conmon does support the --sync option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.032284337Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.036889357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.036919786Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.037738435Z" level=info msg="Current CRI-O configuration:"
	[crio]
	  root = "/var/lib/containers/storage"
	  runroot = "/run/containers/storage"
	  imagestore = ""
	  storage_driver = "overlay"
	  log_dir = "/var/log/crio/pods"
	  version_file = "/var/run/crio/version"
	  version_file_persist = ""
	  clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	  internal_wipe = true
	  internal_repair = true
	  [crio.api]
	    grpc_max_send_msg_size = 83886080
	    grpc_max_recv_msg_size = 83886080
	    listen = "/var/run/crio/crio.sock"
	    stream_address = "127.0.0.1"
	    stream_port = "0"
	    stream_enable_tls = false
	    stream_tls_cert = ""
	    stream_tls_key = ""
	    stream_tls_ca = ""
	    stream_idle_timeout = ""
	  [crio.runtime]
	    no_pivot = false
	    selinux = false
	    log_to_journald = false
	    drop_infra_ctr = true
	    read_only = false
	    hooks_dir = ["/usr/share/containers/oci/hooks.d"]
	    default_capabilities = ["CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "KILL"]
	    add_inheritable_capabilities = false
	    default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
	    allowed_devices = ["/dev/fuse", "/dev/net/tun"]
	    cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
	    device_ownership_from_security_context = false
	    default_runtime = "crun"
	    decryption_keys_path = "/etc/crio/keys/"
	    conmon = ""
	    conmon_cgroup = "pod"
	    seccomp_profile = ""
	    privileged_seccomp_profile = ""
	    apparmor_profile = "crio-default"
	    blockio_config_file = ""
	    blockio_reload = false
	    irqbalance_config_file = "/etc/sysconfig/irqbalance"
	    rdt_config_file = ""
	    cgroup_manager = "systemd"
	    default_mounts_file = ""
	    container_exits_dir = "/var/run/crio/exits"
	    container_attach_socket_dir = "/var/run/crio"
	    bind_mount_prefix = ""
	    uid_mappings = ""
	    minimum_mappable_uid = -1
	    gid_mappings = ""
	    minimum_mappable_gid = -1
	    log_level = "info"
	    log_filter = ""
	    namespaces_dir = "/var/run"
	    pinns_path = "/usr/bin/pinns"
	    enable_criu_support = false
	    pids_limit = -1
	    log_size_max = -1
	    ctr_stop_timeout = 30
	    separate_pull_cgroup = ""
	    infra_ctr_cpuset = ""
	    shared_cpuset = ""
	    enable_pod_events = false
	    irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	    hostnetwork_disable_selinux = true
	    disable_hostport_mapping = false
	    timezone = ""
	    [crio.runtime.runtimes]
	      [crio.runtime.runtimes.crun]
	        runtime_config_path = ""
	        runtime_path = "/usr/libexec/crio/crun"
	        runtime_type = ""
	        runtime_root = "/run/crun"
	        allowed_annotations = ["io.containers.trace-syscall"]
	        monitor_path = "/usr/libexec/crio/conmon"
	        monitor_cgroup = "pod"
	        container_min_memory = "12MiB"
	        no_sync_log = false
	      [crio.runtime.runtimes.runc]
	        runtime_config_path = ""
	        runtime_path = "/usr/libexec/crio/runc"
	        runtime_type = ""
	        runtime_root = "/run/runc"
	        monitor_path = "/usr/libexec/crio/conmon"
	        monitor_cgroup = "pod"
	        container_min_memory = "12MiB"
	        no_sync_log = false
	  [crio.image]
	    default_transport = "docker://"
	    global_auth_file = ""
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    pause_image_auth_file = ""
	    pause_command = "/pause"
	    signature_policy = "/etc/crio/policy.json"
	    signature_policy_dir = "/etc/crio/policies"
	    image_volumes = "mkdir"
	    big_files_temporary_dir = ""
	    auto_reload_registries = false
	    pull_progress_timeout = "0s"
	    oci_artifact_mount_support = true
	    short_name_mode = "enforcing"
	  [crio.network]
	    cni_default_network = ""
	    network_dir = "/etc/cni/net.d/"
	    plugin_dirs = ["/opt/cni/bin/"]
	  [crio.metrics]
	    enable_metrics = false
	    metrics_collectors = ["image_pulls_layer_size", "containers_events_dropped_total", "containers_oom_total", "processes_defunct", "operations_total", "operations_latency_seconds", "operations_latency_seconds_total", "operations_errors_total", "image_pulls_bytes_total", "image_pulls_skipped_bytes_total", "image_pulls_failure_total", "image_pulls_success_total", "image_layer_reuse_total", "containers_oom_count_total", "containers_seccomp_notifier_count_total", "resources_stalled_at_stage", "containers_stopped_monitor_count"]
	    metrics_host = "127.0.0.1"
	    metrics_port = 9090
	    metrics_socket = ""
	    metrics_cert = ""
	    metrics_key = ""
	  [crio.tracing]
	    enable_tracing = false
	    tracing_endpoint = "127.0.0.1:4317"
	    tracing_sampling_rate_per_million = 0
	  [crio.stats]
	    stats_collection_period = 0
	    collection_period = 0
	  [crio.nri]
	    enable_nri = true
	    nri_listen = "/var/run/nri/nri.sock"
	    nri_plugin_dir = "/opt/nri/plugins"
	    nri_plugin_config_dir = "/etc/nri/conf.d"
	    nri_plugin_registration_timeout = "5s"
	    nri_plugin_request_timeout = "2s"
	    nri_disable_connections = false
	    [crio.nri.default_validator]
	      nri_enable_default_validator = false
	      nri_validator_reject_oci_hook_adjustment = false
	      nri_validator_reject_runtime_default_seccomp_adjustment = false
	      nri_validator_reject_unconfined_seccomp_adjustment = false
	      nri_validator_reject_custom_seccomp_adjustment = false
	      nri_validator_reject_namespace_adjustment = false
	      nri_validator_tolerate_missing_plugins_annotation = ""
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.039504952Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.040193316Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136034408Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r6qfd Namespace:kube-system ID:15e0f3298e67c167f2e3bd8fc6eff6a4eb060a5575e72a43cd2571b7dcd43572 UID:fbadec5c-64a9-4074-8974-ee57a834a726 NetNS:/var/run/netns/dd1da3bd-195f-4a13-a1ee-2354a688d4c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008d85c8}] Aliases:map[]}"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136274403Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r6qfd for CNI network kindnet (type=ptp)"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136789131Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136811414Z" level=info msg="Starting seccomp notifier watcher"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136875762Z" level=info msg="Create NRI interface"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137023933Z" level=info msg="built-in NRI default validator is disabled"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137040191Z" level=info msg="runtime interface created"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137050037Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137054946Z" level=info msg="runtime interface starting up..."
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137061035Z" level=info msg="starting plugins..."
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137073515Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137407725Z" level=info msg="No systemd watchdog enabled"
	Nov 20 21:15:14 pause-643572 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
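
In the configuration dump above, pause_image still reads registry.k8s.io/pause:3.10.1: the dump is from 21:15:14, just before the sed rewrites at 21:15:15 set it to registry.k8s.io/pause:3.9 and reasserted cgroup_manager = "systemd". A minimal Go sketch of an equivalent in-place key rewrite (setKey is a hypothetical helper; assumes the drop-in exists and the process runs as root):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setKey rewrites `key = ...` lines in a CRI-O drop-in, like the
	// `sudo sed -i 's|^.*pause_image = .*$|...|'` commands in the log.
	func setKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for k, v := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.9",
			"cgroup_manager": "systemd",
		} {
			if err := setKey(conf, k, v); err != nil {
				panic(err)
			}
		}
	}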
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	27e8dac1f2fb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   15e0f3298e67c       coredns-66bc5c9577-r6qfd               kube-system
	ccfd8a18718c8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   9b9ca7c40ac24       kube-proxy-swvst                       kube-system
	e3147a2781c7b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   c44bb1a08cb05       kindnet-hgd2l                          kube-system
	fd8d921236a2b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   d6413ad57385b       etcd-pause-643572                      kube-system
	5e669533d87d0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   b09c09511f176       kube-controller-manager-pause-643572   kube-system
	209b36591750a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   8d98d5a16b988       kube-scheduler-pause-643572            kube-system
	7a37e9e48eb90       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   1dceb026533b2       kube-apiserver-pause-643572            kube-system
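
The listing matches `sudo crictl ps -a` output against the crictl.yaml endpoint written earlier. Captured from Go, as a trivial sketch (assumes crictl is installed and sudo needs no password):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `crictl ps -a` prints the CONTAINER/IMAGE/STATE/POD table shown above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}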
	
	
	==> coredns [27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48307 - 56754 "HINFO IN 3464365375502177551.4633506263052224490. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020910974s
	
	
	==> describe nodes <==
	Name:               pause-643572
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-643572
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=pause-643572
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_14_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:14:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-643572
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:15:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:15:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-643572
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                449b4ad6-8aec-403f-b7d9-9a471d314d7b
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r6qfd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-643572                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-hgd2l                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-643572             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-643572    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-swvst                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-643572             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-643572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-643572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-643572 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-643572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-643572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-643572 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-643572 event: Registered Node pause-643572 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-643572 status is now: NodeReady
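
The Allocated resources block above is just the column sums from the pod table: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 0 (kube-proxy) + 100m (kube-scheduler) = 850m, and 850m/8000m on the 8-CPU node rounds to 10%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi; the only limits set are kindnet's 100m CPU and 50Mi memory plus coredns's 170Mi memory, hence the 220Mi limit total.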
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8] <==
	{"level":"warn","ts":"2025-11-20T21:14:46.194684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.206098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.216939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.226562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.236363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.246415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.254122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.262605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.274356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.283420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.290244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.303585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.314285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.324227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.336482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.342667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.351292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.365444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.373805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.382696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.398999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.408879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.417628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.490161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:15:10.403946Z","caller":"traceutil/trace.go:172","msg":"trace[1112039072] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"135.533982ms","start":"2025-11-20T21:15:10.268392Z","end":"2025-11-20T21:15:10.403926Z","steps":["trace[1112039072] 'process raft request'  (duration: 135.412437ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:15:20 up  3:57,  0 user,  load average: 5.25, 2.24, 1.37
	Linux pause-643572 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175] <==
	I1120 21:14:55.902513       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:14:55.906791       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:14:55.907040       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:14:55.907087       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:14:55.907146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:14:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:14:56.160077       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:14:56.160154       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:14:56.160175       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:14:56.160383       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:14:56.799267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:14:56.799294       1 metrics.go:72] Registering metrics
	I1120 21:14:56.799340       1 controller.go:711] "Syncing nftables rules"
	I1120 21:15:06.161029       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:15:06.161085       1 main.go:301] handling current node
	I1120 21:15:16.164323       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:15:16.164374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4] <==
	I1120 21:14:47.293821       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1120 21:14:47.294027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:14:47.296414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:14:47.298790       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:47.298876       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:14:47.306698       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:47.307430       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:14:47.309832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:14:48.180022       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:14:48.184284       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:14:48.184306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:14:48.707265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:14:48.749631       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:14:48.887154       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:14:48.893684       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:14:48.894911       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:14:48.899526       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:14:49.239375       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:14:49.820771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:14:49.829318       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:14:49.836706       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:14:54.892361       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:14:55.140900       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:14:55.292815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:55.296731       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74] <==
	I1120 21:14:54.240338       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:14:54.240394       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:14:54.240468       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:14:54.242768       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:14:54.243966       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:14:54.244048       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:14:54.244110       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:14:54.244164       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:14:54.244091       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:14:54.244386       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:14:54.244396       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:14:54.244404       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:14:54.246987       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:14:54.246985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:14:54.248466       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:14:54.250015       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:14:54.250117       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:14:54.251514       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-643572" podCIDRs=["10.244.0.0/24"]
	I1120 21:14:54.253537       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:14:54.260960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:14:54.262125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:14:54.263144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:14:54.263193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:14:54.266571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:15:09.190585       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249] <==
	I1120 21:14:55.564478       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:14:55.631949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:14:55.732469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:14:55.732520       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:14:55.732731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:14:55.764662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:14:55.764721       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:14:55.771148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:14:55.771605       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:14:55.771683       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:14:55.776332       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:14:55.776368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:14:55.776639       1 config.go:200] "Starting service config controller"
	I1120 21:14:55.776665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:14:55.776781       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:14:55.776826       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:14:55.777660       1 config.go:309] "Starting node config controller"
	I1120 21:14:55.777692       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:14:55.777700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:14:55.876726       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:14:55.876769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:14:55.877041       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f] <==
	E1120 21:14:47.286844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:14:47.286907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:14:47.286970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:14:47.287027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:14:47.287090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:14:47.290675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:14:47.291092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:14:47.297484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:14:47.297596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:14:47.297617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:14:47.297588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:14:47.297484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:14:47.297753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:14:47.297906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:14:48.101906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:14:48.175162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:14:48.206571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:14:48.208541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:14:48.223612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:14:48.283618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:14:48.365821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:14:48.369914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:14:48.452442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:14:48.745885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 21:14:50.580395       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192426    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzp9\" (UniqueName: \"kubernetes.io/projected/dbe57678-242c-461f-8fb9-ab0f9f330549-kube-api-access-4vzp9\") pod \"kindnet-hgd2l\" (UID: \"dbe57678-242c-461f-8fb9-ab0f9f330549\") " pod="kube-system/kindnet-hgd2l"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192452    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588fdc04-3336-4f17-8703-6d406b356e59-lib-modules\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192582    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbe57678-242c-461f-8fb9-ab0f9f330549-xtables-lock\") pod \"kindnet-hgd2l\" (UID: \"dbe57678-242c-461f-8fb9-ab0f9f330549\") " pod="kube-system/kindnet-hgd2l"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192604    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/588fdc04-3336-4f17-8703-6d406b356e59-kube-proxy\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192710    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hc5m\" (UniqueName: \"kubernetes.io/projected/588fdc04-3336-4f17-8703-6d406b356e59-kube-api-access-6hc5m\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.726961    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-swvst" podStartSLOduration=0.726941557 podStartE2EDuration="726.941557ms" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:55.726867858 +0000 UTC m=+6.142523447" watchObservedRunningTime="2025-11-20 21:14:55.726941557 +0000 UTC m=+6.142597147"
	Nov 20 21:14:57 pause-643572 kubelet[1336]: I1120 21:14:57.403880    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hgd2l" podStartSLOduration=2.4038607069999998 podStartE2EDuration="2.403860707s" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:55.739150168 +0000 UTC m=+6.154805757" watchObservedRunningTime="2025-11-20 21:14:57.403860707 +0000 UTC m=+7.819516296"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.384968    1336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.477000    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbadec5c-64a9-4074-8974-ee57a834a726-config-volume\") pod \"coredns-66bc5c9577-r6qfd\" (UID: \"fbadec5c-64a9-4074-8974-ee57a834a726\") " pod="kube-system/coredns-66bc5c9577-r6qfd"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.477062    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjlr\" (UniqueName: \"kubernetes.io/projected/fbadec5c-64a9-4074-8974-ee57a834a726-kube-api-access-dpjlr\") pod \"coredns-66bc5c9577-r6qfd\" (UID: \"fbadec5c-64a9-4074-8974-ee57a834a726\") " pod="kube-system/coredns-66bc5c9577-r6qfd"
	Nov 20 21:15:07 pause-643572 kubelet[1336]: I1120 21:15:07.783017    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r6qfd" podStartSLOduration=12.782986742 podStartE2EDuration="12.782986742s" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:15:07.763916799 +0000 UTC m=+18.179572401" watchObservedRunningTime="2025-11-20 21:15:07.782986742 +0000 UTC m=+18.198642331"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.682364    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682559    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682707    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682727    1336 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682739    1336 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752836    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752891    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752904    1336 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.783114    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.916363    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:18 pause-643572 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:15:18 pause-643572 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:15:18 pause-643572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:15:18 pause-643572 systemd[1]: kubelet.service: Consumed 1.266s CPU time.
	

                                                
                                                
-- /stdout --
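The kubelet's crio.sock dial errors at 21:15:11 fall in the window before crio.service came back up at 21:15:14 in the journal above, and systemd then stops kubelet.service at 21:15:18, so the exit-status-2 from the status probe below is consistent with a node whose agent was just stopped. A quick Go sketch of the same kind of unit-state check that appears in the log as `systemctl is-active --quiet service ...` (isActive is a hypothetical helper, run on the node itself):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// isActive reports whether a systemd unit is currently active.
	func isActive(unit string) bool {
		out, _ := exec.Command("systemctl", "is-active", unit).Output()
		return strings.TrimSpace(string(out)) == "active"
	}

	func main() {
		for _, u := range []string{"kubelet", "crio"} {
			fmt.Printf("%s active: %v\n", u, isActive(u))
		}
	}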
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-643572 -n pause-643572
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-643572 -n pause-643572: exit status 2 (369.709095ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-643572 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
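That last probe is a field-selector query for pods whose status.phase is not Running, across all namespaces; an empty result means nothing was Pending or Failed when the post-mortem ran. The same query issued from Go via os/exec, for anyone replaying this by hand (assumes kubectl and the pause-643572 context are available):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List non-Running pods across all namespaces, as helpers_test.go does above.
		out, err := exec.Command("kubectl", "--context", "pause-643572",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("non-Running pods: %q\n", string(out))
	}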
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-643572
helpers_test.go:243: (dbg) docker inspect pause-643572:

-- stdout --
	[
	    {
	        "Id": "dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4",
	        "Created": "2025-11-20T21:14:30.981194784Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:14:31.030338057Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/hosts",
	        "LogPath": "/var/lib/docker/containers/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4/dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4-json.log",
	        "Name": "/pause-643572",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-643572:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-643572",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbe3632f021c707437a298800212d93e869b56c8fe7be8dd4fe6d8feddf74df4",
	                "LowerDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e16c79e8bad113ebb227f49fb46a5b08a6a5eb555306d559fb5bc512974c5229/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-643572",
	                "Source": "/var/lib/docker/volumes/pause-643572/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-643572",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-643572",
	                "name.minikube.sigs.k8s.io": "pause-643572",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d457fac682eeaaa3a2a8b6834f8890ea90674a05734462eb429822e146c5d4ff",
	            "SandboxKey": "/var/run/docker/netns/d457fac682ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-643572": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e4fcb374cc0c4ac1a66f944635319c2747cf3cfbcad77203adc396148b3ef983",
	                    "EndpointID": "4826255bdf02dcd46bc39b50e012e640a25cb8aa42f330a5477ceb91ed1662c0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "72:01:8d:46:7f:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-643572",
	                        "dbe3632f021c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
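For reference, the forwarded API-server port under "NetworkSettings.Ports" can be read without paging through the full JSON by using docker's Go-template support; a minimal sketch against the container name shown above:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' pause-643572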
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-643572 -n pause-643572
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-643572 -n pause-643572: exit status 2 (402.537482ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
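The two status probes above each pull a single field via a Go template ({{.APIServer}}, {{.Host}}). When debugging by hand it is usually easier to read every component state at once; a sketch, not part of the test run:

	out/minikube-linux-amd64 status -p pause-643572 --output json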
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-643572 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-643572 logs -n 25: (1.105837918s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-936763 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cri-dockerd --version                                                                                 │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl cat containerd --no-pager                                                                   │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo cat /etc/containerd/config.toml                                                                       │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo containerd config dump                                                                                │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo systemctl cat crio --no-pager                                                                         │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ ssh     │ -p cilium-936763 sudo crio config                                                                                           │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │                     │
	│ delete  │ -p cilium-936763                                                                                                            │ cilium-936763             │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:14 UTC │
	│ start   │ -p force-systemd-env-267271 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-267271  │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:14 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p force-systemd-env-267271                                                                                                 │ force-systemd-env-267271  │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p NoKubernetes-806709                                                                                                      │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p force-systemd-flag-687992 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-687992 │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ start   │ -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ delete  │ -p offline-crio-735987                                                                                                      │ offline-crio-735987       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p pause-643572 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-643572              │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p cert-expiration-118194 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-118194    │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ pause   │ -p pause-643572 --alsologtostderr -v=5                                                                                      │ pause-643572              │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ ssh     │ -p NoKubernetes-806709 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	│ stop    │ -p NoKubernetes-806709                                                                                                      │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │ 20 Nov 25 21:15 UTC │
	│ start   │ -p NoKubernetes-806709 --driver=docker  --container-runtime=crio                                                            │ NoKubernetes-806709       │ jenkins │ v1.37.0 │ 20 Nov 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:15:22
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:15:22.154094  448539 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:15:22.154453  448539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:22.154459  448539 out.go:374] Setting ErrFile to fd 2...
	I1120 21:15:22.154464  448539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:15:22.154792  448539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:15:22.155357  448539 out.go:368] Setting JSON to false
	I1120 21:15:22.156618  448539 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14264,"bootTime":1763659058,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:15:22.156675  448539 start.go:143] virtualization: kvm guest
	I1120 21:15:22.159012  448539 out.go:179] * [NoKubernetes-806709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:15:22.160398  448539 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:15:22.160396  448539 notify.go:221] Checking for updates...
	I1120 21:15:22.163795  448539 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:15:22.165502  448539 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:15:22.167235  448539 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:15:22.168632  448539 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:15:22.170007  448539 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.030242395Z" level=info msg="RDT not available in the host system"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.030260417Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031264507Z" level=info msg="Conmon does support the --sync option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031303233Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.031319041Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.032267287Z" level=info msg="Conmon does support the --sync option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.032284337Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.036889357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.036919786Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.037738435Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.039504952Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.040193316Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136034408Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r6qfd Namespace:kube-system ID:15e0f3298e67c167f2e3bd8fc6eff6a4eb060a5575e72a43cd2571b7dcd43572 UID:fbadec5c-64a9-4074-8974-ee57a834a726 NetNS:/var/run/netns/dd1da3bd-195f-4a13-a1ee-2354a688d4c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008d85c8}] Aliases:map[]}"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136274403Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r6qfd for CNI network kindnet (type=ptp)"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136789131Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136811414Z" level=info msg="Starting seccomp notifier watcher"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.136875762Z" level=info msg="Create NRI interface"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137023933Z" level=info msg="built-in NRI default validator is disabled"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137040191Z" level=info msg="runtime interface created"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137050037Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137054946Z" level=info msg="runtime interface starting up..."
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137061035Z" level=info msg="starting plugins..."
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137073515Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 20 21:15:14 pause-643572 crio[2188]: time="2025-11-20T21:15:14.137407725Z" level=info msg="No systemd watchdog enabled"
	Nov 20 21:15:14 pause-643572 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
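	
	With crio.service started again (last line above), the runtime answers on the very socket the kubelet was failing to dial earlier; a minimal cross-check, assuming crictl is available in the node image:
	
	out/minikube-linux-amd64 ssh -p pause-643572 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"
	
	The "==> container status <==" table below is essentially that view as captured by minikube logs.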
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	27e8dac1f2fb4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   15e0f3298e67c       coredns-66bc5c9577-r6qfd               kube-system
	ccfd8a18718c8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   9b9ca7c40ac24       kube-proxy-swvst                       kube-system
	e3147a2781c7b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   c44bb1a08cb05       kindnet-hgd2l                          kube-system
	fd8d921236a2b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago      Running             etcd                      0                   d6413ad57385b       etcd-pause-643572                      kube-system
	5e669533d87d0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago      Running             kube-controller-manager   0                   b09c09511f176       kube-controller-manager-pause-643572   kube-system
	209b36591750a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago      Running             kube-scheduler            0                   8d98d5a16b988       kube-scheduler-pause-643572            kube-system
	7a37e9e48eb90       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago      Running             kube-apiserver            0                   1dceb026533b2       kube-apiserver-pause-643572            kube-system
	
	
	==> coredns [27e8dac1f2fb4271263fc533fd41e1cf38812b3ed638df73636d8b7cd15d90a4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48307 - 56754 "HINFO IN 3464365375502177551.4633506263052224490. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020910974s
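	
	The single NXDOMAIN line above is CoreDNS's loop-detection probe answering itself at startup, not a client query. The same log can be fetched by label rather than container ID; k8s-app=kube-dns is the label kubeadm applies to the CoreDNS pods (assumed here):
	
	kubectl --context pause-643572 -n kube-system logs -l k8s-app=kube-dns --tail=20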
	
	
	==> describe nodes <==
	Name:               pause-643572
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-643572
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=pause-643572
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_14_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:14:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-643572
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:15:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:14:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:15:10 +0000   Thu, 20 Nov 2025 21:15:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-643572
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                449b4ad6-8aec-403f-b7d9-9a471d314d7b
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r6qfd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-643572                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-hgd2l                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-643572             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-643572    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-swvst                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-643572             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node pause-643572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node pause-643572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node pause-643572 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node pause-643572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node pause-643572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node pause-643572 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node pause-643572 event: Registered Node pause-643572 in Controller
	  Normal  NodeReady                16s                kubelet          Node pause-643572 status is now: NodeReady
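	
	The doubled "Starting kubelet" entries above line up with the two kubelet starts visible in the journal excerpt at the top of this post-mortem (the service was restarted during the test). The underlying node events can also be listed directly with a field selector; a sketch, not captured by the test:
	
	kubectl --context pause-643572 get events -A --field-selector involvedObject.name=pause-643572 --sort-by=.lastTimestamp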
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 80 dd 1f 3c 89 08 06
	[Nov20 20:23] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 82 3d 59 ac fa 08 06
	[Nov20 20:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.053479] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023936] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +2.047762] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +4.031673] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[  +8.127416] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[ +16.382740] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 20:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	
	
	==> etcd [fd8d921236a2b699217abac74bc21b3fe80b439dba9372df6bd7f7926be275a8] <==
	{"level":"warn","ts":"2025-11-20T21:14:46.194684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.206098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.216939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.226562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.236363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.246415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.254122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.262605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.274356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.283420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.290244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.303585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.314285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.324227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.336482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.342667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.351292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.365444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.373805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.382696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.398999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.408879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.417628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:14:46.490161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:15:10.403946Z","caller":"traceutil/trace.go:172","msg":"trace[1112039072] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"135.533982ms","start":"2025-11-20T21:15:10.268392Z","end":"2025-11-20T21:15:10.403926Z","steps":["trace[1112039072] 'process raft request'  (duration: 135.412437ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:15:23 up  3:57,  0 user,  load average: 5.15, 2.27, 1.38
	Linux pause-643572 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3147a2781c7b62124a989a5b0c7e75dd0088c0ebeae242c1b0620793a49c175] <==
	I1120 21:14:55.902513       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:14:55.906791       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:14:55.907040       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:14:55.907087       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:14:55.907146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:14:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:14:56.160077       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:14:56.160154       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:14:56.160175       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:14:56.160383       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:14:56.799267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:14:56.799294       1 metrics.go:72] Registering metrics
	I1120 21:14:56.799340       1 controller.go:711] "Syncing nftables rules"
	I1120 21:15:06.161029       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:15:06.161085       1 main.go:301] handling current node
	I1120 21:15:16.164323       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:15:16.164374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a37e9e48eb909697c853bc7da4f5c7c3b725e66987c122f5fbb4400e359b1e4] <==
	I1120 21:14:47.293821       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1120 21:14:47.294027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:14:47.296414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:14:47.298790       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:47.298876       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:14:47.306698       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:47.307430       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:14:47.309832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:14:48.180022       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:14:48.184284       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:14:48.184306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:14:48.707265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:14:48.749631       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:14:48.887154       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:14:48.893684       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:14:48.894911       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:14:48.899526       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:14:49.239375       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:14:49.820771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:14:49.829318       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:14:49.836706       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:14:54.892361       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:14:55.140900       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:14:55.292815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:14:55.296731       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5e669533d87d058607e7b14b83038db9b6d970da0609f437c0d5f4da3bf93e74] <==
	I1120 21:14:54.240338       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:14:54.240394       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:14:54.240468       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:14:54.242768       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:14:54.243966       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:14:54.244048       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:14:54.244110       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:14:54.244164       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:14:54.244091       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:14:54.244386       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:14:54.244396       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:14:54.244404       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:14:54.246987       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:14:54.246985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:14:54.248466       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:14:54.250015       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:14:54.250117       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:14:54.251514       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-643572" podCIDRs=["10.244.0.0/24"]
	I1120 21:14:54.253537       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:14:54.260960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:14:54.262125       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:14:54.263144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:14:54.263193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:14:54.266571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:15:09.190585       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ccfd8a18718c8944d736692e05fab028329a4505100981326f1f51098eb64249] <==
	I1120 21:14:55.564478       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:14:55.631949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:14:55.732469       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:14:55.732520       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:14:55.732731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:14:55.764662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:14:55.764721       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:14:55.771148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:14:55.771605       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:14:55.771683       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:14:55.776332       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:14:55.776368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:14:55.776639       1 config.go:200] "Starting service config controller"
	I1120 21:14:55.776665       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:14:55.776781       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:14:55.776826       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:14:55.777660       1 config.go:309] "Starting node config controller"
	I1120 21:14:55.777692       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:14:55.777700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:14:55.876726       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:14:55.876769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:14:55.877041       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [209b36591750a9245aaa6154d1de0f09139a7d6d2fa0bece02bfda3404624a2f] <==
	E1120 21:14:47.286844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:14:47.286907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:14:47.286970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:14:47.287027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:14:47.287090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:14:47.290675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:14:47.291092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:14:47.297484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:14:47.297596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:14:47.297617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:14:47.297588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:14:47.297484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:14:47.297753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:14:47.297906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:14:48.101906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:14:48.175162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:14:48.206571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:14:48.208541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:14:48.223612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:14:48.283618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:14:48.365821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:14:48.369914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:14:48.452442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:14:48.745885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 21:14:50.580395       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192426    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzp9\" (UniqueName: \"kubernetes.io/projected/dbe57678-242c-461f-8fb9-ab0f9f330549-kube-api-access-4vzp9\") pod \"kindnet-hgd2l\" (UID: \"dbe57678-242c-461f-8fb9-ab0f9f330549\") " pod="kube-system/kindnet-hgd2l"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192452    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588fdc04-3336-4f17-8703-6d406b356e59-lib-modules\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192582    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbe57678-242c-461f-8fb9-ab0f9f330549-xtables-lock\") pod \"kindnet-hgd2l\" (UID: \"dbe57678-242c-461f-8fb9-ab0f9f330549\") " pod="kube-system/kindnet-hgd2l"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192604    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/588fdc04-3336-4f17-8703-6d406b356e59-kube-proxy\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.192710    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hc5m\" (UniqueName: \"kubernetes.io/projected/588fdc04-3336-4f17-8703-6d406b356e59-kube-api-access-6hc5m\") pod \"kube-proxy-swvst\" (UID: \"588fdc04-3336-4f17-8703-6d406b356e59\") " pod="kube-system/kube-proxy-swvst"
	Nov 20 21:14:55 pause-643572 kubelet[1336]: I1120 21:14:55.726961    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-swvst" podStartSLOduration=0.726941557 podStartE2EDuration="726.941557ms" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:55.726867858 +0000 UTC m=+6.142523447" watchObservedRunningTime="2025-11-20 21:14:55.726941557 +0000 UTC m=+6.142597147"
	Nov 20 21:14:57 pause-643572 kubelet[1336]: I1120 21:14:57.403880    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hgd2l" podStartSLOduration=2.4038607069999998 podStartE2EDuration="2.403860707s" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:14:55.739150168 +0000 UTC m=+6.154805757" watchObservedRunningTime="2025-11-20 21:14:57.403860707 +0000 UTC m=+7.819516296"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.384968    1336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.477000    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbadec5c-64a9-4074-8974-ee57a834a726-config-volume\") pod \"coredns-66bc5c9577-r6qfd\" (UID: \"fbadec5c-64a9-4074-8974-ee57a834a726\") " pod="kube-system/coredns-66bc5c9577-r6qfd"
	Nov 20 21:15:06 pause-643572 kubelet[1336]: I1120 21:15:06.477062    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjlr\" (UniqueName: \"kubernetes.io/projected/fbadec5c-64a9-4074-8974-ee57a834a726-kube-api-access-dpjlr\") pod \"coredns-66bc5c9577-r6qfd\" (UID: \"fbadec5c-64a9-4074-8974-ee57a834a726\") " pod="kube-system/coredns-66bc5c9577-r6qfd"
	Nov 20 21:15:07 pause-643572 kubelet[1336]: I1120 21:15:07.783017    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r6qfd" podStartSLOduration=12.782986742 podStartE2EDuration="12.782986742s" podCreationTimestamp="2025-11-20 21:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:15:07.763916799 +0000 UTC m=+18.179572401" watchObservedRunningTime="2025-11-20 21:15:07.782986742 +0000 UTC m=+18.198642331"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.682364    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682559    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682707    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682727    1336 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.682739    1336 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752836    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752891    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: E1120 21:15:11.752904    1336 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.783114    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:11 pause-643572 kubelet[1336]: W1120 21:15:11.916363    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 20 21:15:18 pause-643572 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:15:18 pause-643572 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:15:18 pause-643572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:15:18 pause-643572 systemd[1]: kubelet.service: Consumed 1.266s CPU time.
	

                                                
                                                
-- /stdout --
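
The kubelet tail above shows the proximate symptom for TestPause/serial/Pause: after the pause, every CRI call fails with "dial unix /var/run/crio/crio.sock: connect: no such file or directory", and systemd then stops kubelet. A minimal sketch for inspecting the runtime socket by hand, assuming the pause-643572 profile is still up (the ssh invocation and the crio systemd unit name are standard minikube/CRI-O usage, not taken from this report):

	# Is CRI-O still running inside the node after the pause?
	out/minikube-linux-amd64 -p pause-643572 ssh "sudo systemctl status crio --no-pager"
	# Does the socket the kubelet is dialing actually exist?
	out/minikube-linux-amd64 -p pause-643572 ssh "ls -l /var/run/crio/crio.sock"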
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-643572 -n pause-643572
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-643572 -n pause-643572: exit status 2 (352.389517ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-643572 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.00s)
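
The failing sequence can be replayed outside the harness. The status template and the pod query below are lifted verbatim from the post-mortem commands above; only the pause step is added here, and that is ordinary minikube usage:

	# Pause the control plane, then ask for API server status the way the harness does.
	out/minikube-linux-amd64 pause -p pause-643572 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-643572 -n pause-643572
	# A paused control plane would be expected to report Paused; this run reported Running.
	kubectl --context pause-643572 get po -A --field-selector=status.phase!=Running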

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-936214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-936214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (343.763646ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:21:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-936214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
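The MK_ADDON_ENABLE_PAUSED exit above means minikube's paused-check failed before the addon was ever applied: the check shells out to runc, and "sudo runc list -f json" fails because /run/runc is missing on the node. The check can be reproduced by hand; the runc command is quoted verbatim from the error, and the ssh wrapper is ordinary minikube usage:

	# Re-run the exact command the paused-check executes.
	out/minikube-linux-amd64 -p old-k8s-version-936214 ssh "sudo runc list -f json"
	# Confirm whether runc's state directory exists at all.
	out/minikube-linux-amd64 -p old-k8s-version-936214 ssh "ls -ld /run/runc"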
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-936214 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-936214 describe deploy/metrics-server -n kube-system: exit status 1 (93.319262ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-936214 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
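The assertion at start_stop_delete_test.go:219 compares the metrics-server deployment's image against the expected "fake.domain/registry.k8s.io/echoserver:1.4"; here the deployment never existed because the enable step already failed. On a run where the addon does enable, the override can be checked directly; a sketch (the jsonpath expression is illustrative, not part of the harness):

	kubectl --context old-k8s-version-936214 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'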
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-936214
helpers_test.go:243: (dbg) docker inspect old-k8s-version-936214:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	        "Created": "2025-11-20T21:20:38.133542071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 530283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:20:38.24110291Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hosts",
	        "LogPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d-json.log",
	        "Name": "/old-k8s-version-936214",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-936214:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-936214",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	                "LowerDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-936214",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-936214/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-936214",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d6164a0d1e580f335a56b67eb60ee85bf0a11a0fe6acc58e4e5aa6f9fb15d8c",
	            "SandboxKey": "/var/run/docker/netns/3d6164a0d1e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-936214": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5b009581e5fe97051a52995f889c213d44d34cc774e441d6eb45e5a9ea52ad6",
	                    "EndpointID": "d8dbd69ff39bed516d69eb89412f41d4bbd9eeacc6771ab39adf4f874808450c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ea:6e:99:b0:fb:56",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-936214",
	                        "6dcf9965a656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
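
Most of the inspect dump above is boilerplate; the facts the post-mortem actually leans on are the container state and the host-port mappings. Both can be pulled without the full JSON; a sketch using standard docker flags (the port numbers should match the NetworkSettings block above, e.g. 33091 for 8443/tcp):

	# Container state only.
	docker inspect --format '{{.State.Status}}' old-k8s-version-936214
	# Host side of the API server port (guest 8443).
	docker port old-k8s-version-936214 8443/tcp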
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25: (1.204782435s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-936763 sudo systemctl cat crio --no-pager                                                                                                    │ calico-936763          │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p calico-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                          │ calico-936763          │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p calico-936763 sudo crio config                                                                                                                      │ calico-936763          │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p calico-936763                                                                                                                                       │ calico-936763          │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/nsswitch.conf                                                                                                   │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/hosts                                                                                                           │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/resolv.conf                                                                                                     │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-714571     │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo crictl pods                                                                                                              │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crictl ps --all                                                                                                          │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                   │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo ip a s                                                                                                                   │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo ip r s                                                                                                                   │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo iptables-save                                                                                                            │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo iptables -t nat -L -n -v                                                                                                 │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /run/flannel/subnet.env                                                                                              │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/kube-flannel/cni-conf.json                                                                                      │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status kubelet --all --full --no-pager                                                                         │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat kubelet --no-pager                                                                                         │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo journalctl -xeu kubelet --all --full --no-pager                                                                          │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/kubernetes/kubelet.conf                                                                                         │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /var/lib/kubelet/config.yaml                                                                                         │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-936214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-936214 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status docker --all --full --no-pager                                                                          │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat docker --no-pager                                                                                          │ custom-flannel-936763  │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:21:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:21:28.970513  545013 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:21:28.970672  545013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:28.970688  545013 out.go:374] Setting ErrFile to fd 2...
	I1120 21:21:28.970696  545013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:28.971007  545013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:21:28.971675  545013 out.go:368] Setting JSON to false
	I1120 21:21:28.973546  545013 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14631,"bootTime":1763659058,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:21:28.973677  545013 start.go:143] virtualization: kvm guest
	I1120 21:21:28.975796  545013 out.go:179] * [embed-certs-714571] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:21:28.977405  545013 notify.go:221] Checking for updates...
	I1120 21:21:28.977435  545013 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:21:28.978785  545013 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:21:28.980080  545013 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:28.981383  545013 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:21:28.982736  545013 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:21:28.984024  545013 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:21:28.986311  545013 config.go:182] Loaded profile config "custom-flannel-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:28.986502  545013 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:28.986628  545013 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:21:28.986772  545013 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:21:29.016296  545013 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:21:29.016432  545013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:29.085675  545013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-20 21:21:29.074100952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:29.085802  545013 docker.go:319] overlay module found
	I1120 21:21:29.087541  545013 out.go:179] * Using the docker driver based on user configuration
	I1120 21:21:29.088644  545013 start.go:309] selected driver: docker
	I1120 21:21:29.088663  545013 start.go:930] validating driver "docker" against <nil>
	I1120 21:21:29.088685  545013 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:21:29.089462  545013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:29.162190  545013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-20 21:21:29.150473322 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:29.162453  545013 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 21:21:29.162716  545013 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:21:29.164275  545013 out.go:179] * Using Docker driver with root privileges
	I1120 21:21:29.165356  545013 cni.go:84] Creating CNI manager for ""
	I1120 21:21:29.165420  545013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:21:29.165431  545013 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:21:29.165513  545013 start.go:353] cluster config:
	{Name:embed-certs-714571 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-714571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:29.166808  545013 out.go:179] * Starting "embed-certs-714571" primary control-plane node in "embed-certs-714571" cluster
	I1120 21:21:29.167899  545013 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:21:29.168997  545013 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:21:29.170069  545013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:21:29.170110  545013 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:21:29.170122  545013 cache.go:65] Caching tarball of preloaded images
	I1120 21:21:29.170233  545013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:21:29.170307  545013 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:21:29.170324  545013 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:21:29.170445  545013 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/embed-certs-714571/config.json ...
	I1120 21:21:29.170480  545013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/embed-certs-714571/config.json: {Name:mkc3b6506cf3309ad85ed5ff01bbde0cdb0e2485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:29.199674  545013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:21:29.199697  545013 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:21:29.199715  545013 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:21:29.199748  545013 start.go:360] acquireMachinesLock for embed-certs-714571: {Name:mk8885aae490c37ea0ba45f0ab1731ad4142e758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:21:29.199841  545013 start.go:364] duration metric: took 76.848µs to acquireMachinesLock for "embed-certs-714571"
	I1120 21:21:29.199862  545013 start.go:93] Provisioning new machine with config: &{Name:embed-certs-714571 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-714571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:21:29.199930  545013 start.go:125] createHost starting for "" (driver="docker")
	I1120 21:21:27.104293  534759 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:21:27.109497  534759 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:21:27.109521  534759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:21:27.123515  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:21:27.343737  534759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:21:27.343823  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:27.343852  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-166874 minikube.k8s.io/updated_at=2025_11_20T21_21_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=no-preload-166874 minikube.k8s.io/primary=true
	I1120 21:21:27.355335  534759 ops.go:34] apiserver oom_adj: -16
	I1120 21:21:27.425344  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:27.926041  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:28.425365  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:28.925485  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:29.425598  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:29.926161  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:30.426450  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:30.925446  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:31.425812  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:31.926298  534759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:32.016452  534759 kubeadm.go:1114] duration metric: took 4.672695239s to wait for elevateKubeSystemPrivileges
	I1120 21:21:32.016486  534759 kubeadm.go:403] duration metric: took 17.080533279s to StartCluster
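	
	The repeated "get sa default" runs above are minikube waiting for kubeadm to create the default ServiceAccount (the elevateKubeSystemPrivileges step) before declaring the cluster started; it retries on a short interval until the call succeeds. A minimal shell sketch of an equivalent wait, reusing the kubectl binary and kubeconfig paths from the log (illustrative only, not minikube's actual code):
	
	    # Poll until kubeadm has created the "default" ServiceAccount (sketch)
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	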
	I1120 21:21:32.016504  534759 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:32.016611  534759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:32.018500  534759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:32.018834  534759 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:21:32.018870  534759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:21:32.019292  534759 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:32.019172  534759 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:21:32.019429  534759 addons.go:70] Setting storage-provisioner=true in profile "no-preload-166874"
	I1120 21:21:32.019451  534759 addons.go:239] Setting addon storage-provisioner=true in "no-preload-166874"
	I1120 21:21:32.019513  534759 addons.go:70] Setting default-storageclass=true in profile "no-preload-166874"
	I1120 21:21:32.019542  534759 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-166874"
	I1120 21:21:32.019559  534759 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:21:32.019961  534759 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:21:32.020577  534759 out.go:179] * Verifying Kubernetes components...
	I1120 21:21:32.020852  534759 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:21:32.022485  534759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:21:32.060050  534759 addons.go:239] Setting addon default-storageclass=true in "no-preload-166874"
	I1120 21:21:32.060188  534759 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:21:32.061654  534759 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:21:32.062599  534759 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:21:32.063327  534759 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:32.063350  534759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:21:32.063408  534759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:21:32.095260  534759 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:32.095366  534759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:21:32.095504  534759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:21:32.099131  534759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:21:32.120624  534759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:21:32.143534  534759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:21:32.196387  534759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:21:32.221514  534759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:32.243011  534759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:32.336763  534759 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1120 21:21:32.338090  534759 node_ready.go:35] waiting up to 6m0s for node "no-preload-166874" to be "Ready" ...
	I1120 21:21:32.864559  534759 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-166874" context rescaled to 1 replicas
	I1120 21:21:33.336476  534759 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
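	
	The sed pipeline run at 21:21:32.143534 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.94.1 here), and inserts a "log" directive ahead of "errors". Reconstructed from that sed expression (not copied from the cluster), the injected Corefile stanza is:
	
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	
	The "host record injected" line at 21:21:32.336763 confirms the replace succeeded.
	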
	I1120 21:21:29.201559  545013 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:21:29.201757  545013 start.go:159] libmachine.API.Create for "embed-certs-714571" (driver="docker")
	I1120 21:21:29.201783  545013 client.go:173] LocalClient.Create starting
	I1120 21:21:29.201853  545013 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:21:29.201884  545013 main.go:143] libmachine: Decoding PEM data...
	I1120 21:21:29.201898  545013 main.go:143] libmachine: Parsing certificate...
	I1120 21:21:29.201953  545013 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:21:29.201972  545013 main.go:143] libmachine: Decoding PEM data...
	I1120 21:21:29.201983  545013 main.go:143] libmachine: Parsing certificate...
	I1120 21:21:29.202282  545013 cli_runner.go:164] Run: docker network inspect embed-certs-714571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:21:29.223356  545013 cli_runner.go:211] docker network inspect embed-certs-714571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:21:29.223479  545013 network_create.go:284] running [docker network inspect embed-certs-714571] to gather additional debugging logs...
	I1120 21:21:29.223509  545013 cli_runner.go:164] Run: docker network inspect embed-certs-714571
	W1120 21:21:29.242974  545013 cli_runner.go:211] docker network inspect embed-certs-714571 returned with exit code 1
	I1120 21:21:29.243008  545013 network_create.go:287] error running [docker network inspect embed-certs-714571]: docker network inspect embed-certs-714571: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-714571 not found
	I1120 21:21:29.243041  545013 network_create.go:289] output of [docker network inspect embed-certs-714571]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-714571 not found
	
	** /stderr **
	I1120 21:21:29.243207  545013 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:21:29.266696  545013 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:21:29.268162  545013 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:21:29.270829  545013 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:21:29.271819  545013 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002933b90}
	I1120 21:21:29.271849  545013 network_create.go:124] attempt to create docker network embed-certs-714571 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1120 21:21:29.271901  545013 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-714571 embed-certs-714571
	I1120 21:21:29.338311  545013 network_create.go:108] docker network embed-certs-714571 192.168.76.0/24 created
	I1120 21:21:29.338343  545013 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-714571" container
	I1120 21:21:29.338397  545013 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:21:29.361577  545013 cli_runner.go:164] Run: docker volume create embed-certs-714571 --label name.minikube.sigs.k8s.io=embed-certs-714571 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:21:29.383165  545013 oci.go:103] Successfully created a docker volume embed-certs-714571
	I1120 21:21:29.383262  545013 cli_runner.go:164] Run: docker run --rm --name embed-certs-714571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-714571 --entrypoint /usr/bin/test -v embed-certs-714571:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:21:29.843708  545013 oci.go:107] Successfully prepared a docker volume embed-certs-714571
	I1120 21:21:29.843776  545013 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:21:29.843792  545013 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:21:29.843861  545013 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-714571:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
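	
	Provisioning the embed-certs-714571 machine above reduces to three docker CLI steps: create a dedicated bridge network on the first free /24 (192.168.76.0/24, after 49/58/67 were skipped as taken), create a named volume, and extract the preload tarball into that volume. The first two steps, with the exact flags from the log:
	
	    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=embed-certs-714571 embed-certs-714571
	    docker volume create embed-certs-714571 \
	      --label name.minikube.sigs.k8s.io=embed-certs-714571 \
	      --label created_by.minikube.sigs.k8s.io=true
	
	The extraction step then runs tar -I lz4 inside a throwaway kicbase container with the volume mounted at /extractDir, as shown in the final Run line above.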
	
	
	==> CRI-O <==
	Nov 20 21:21:23 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:23.484956172Z" level=info msg="Started container" PID=2119 containerID=30d483cf3c948ca6e2ab5bc7f16359e09479713f6231429cc9d6e69cbda7709c description=kube-system/storage-provisioner/storage-provisioner id=1ea9d7ff-d593-4945-a52b-b33da45fe9aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=8bd362dfd01db8b4bf04bc2dd7b269adbdd7b6c46dc789eac2d9fdbb0c90b1a0
	Nov 20 21:21:23 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:23.485698785Z" level=info msg="Started container" PID=2120 containerID=b68c2b76407ab4c86e2495c2e3caf81b64c76f567312f8a81ed45af93305cc6f description=kube-system/coredns-5dd5756b68-5t2cr/coredns id=ee9d4ce9-e0ca-4798-9e50-22531ab19e9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=868846353d4a45bd00f9ebb25b307f9d20b846aeeda83cb0d4ac16e95e22ddba
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.493506906Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1b6714d9-8fa5-4d38-ac8e-782f70a479c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.493588584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.509206844Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:95322931f5837598b0599a816abffd5c933f597701fde360165a7126f14e7be2 UID:1b53bd6f-5850-4bee-9c34-0ebd759fa96b NetNS:/var/run/netns/b1dcc5b1-2dff-47a9-9be3-003e64d7d6f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000957378}] Aliases:map[]}"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.509283916Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.525203583Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:95322931f5837598b0599a816abffd5c933f597701fde360165a7126f14e7be2 UID:1b53bd6f-5850-4bee-9c34-0ebd759fa96b NetNS:/var/run/netns/b1dcc5b1-2dff-47a9-9be3-003e64d7d6f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000957378}] Aliases:map[]}"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.526268001Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.527479457Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.52876232Z" level=info msg="Ran pod sandbox 95322931f5837598b0599a816abffd5c933f597701fde360165a7126f14e7be2 with infra container: default/busybox/POD" id=1b6714d9-8fa5-4d38-ac8e-782f70a479c9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.530170109Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a510de94-0f22-4e59-a770-b5f75e9eb36f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.530408847Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a510de94-0f22-4e59-a770-b5f75e9eb36f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.530493081Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a510de94-0f22-4e59-a770-b5f75e9eb36f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.531028002Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7c5f35ce-f3e9-48aa-84a5-a5707e971307 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:21:26 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:26.533054888Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.533073064Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7c5f35ce-f3e9-48aa-84a5-a5707e971307 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.534046648Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a36208a1-bb0c-4d2c-84af-40147571a0aa name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.535836039Z" level=info msg="Creating container: default/busybox/busybox" id=4bfd2e2d-e223-415a-ae41-c3748c82280d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.535987828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.540273863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.540771817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.593022791Z" level=info msg="Created container c35e981d347a3e437aad0839680735a6d9e10a662556c30fd167782b3a54ee78: default/busybox/busybox" id=4bfd2e2d-e223-415a-ae41-c3748c82280d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.593908996Z" level=info msg="Starting container: c35e981d347a3e437aad0839680735a6d9e10a662556c30fd167782b3a54ee78" id=a1dc7c0c-c42d-42d4-afb9-ec6e61c3ea3d name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:21:28 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:28.596735653Z" level=info msg="Started container" PID=2193 containerID=c35e981d347a3e437aad0839680735a6d9e10a662556c30fd167782b3a54ee78 description=default/busybox/busybox id=a1dc7c0c-c42d-42d4-afb9-ec6e61c3ea3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=95322931f5837598b0599a816abffd5c933f597701fde360165a7126f14e7be2
	Nov 20 21:21:35 old-k8s-version-936214 crio[767]: time="2025-11-20T21:21:35.318454743Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	c35e981d347a3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   95322931f5837       busybox                                          default
	b68c2b76407ab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   868846353d4a4       coredns-5dd5756b68-5t2cr                         kube-system
	30d483cf3c948       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   8bd362dfd01db       storage-provisioner                              kube-system
	7150fb2ba6199       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   0e4a244439a44       kindnet-949k6                                    kube-system
	182dbe340bc86       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   67bbbd740adfa       kube-proxy-z9sk2                                 kube-system
	5d95207d033fe       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   4cf88680e60f0       kube-controller-manager-old-k8s-version-936214   kube-system
	5020a5d587bcf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   fc2a2c26f7354       etcd-old-k8s-version-936214                      kube-system
	a7a0d39d07913       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   6c96efcb28fd0       kube-apiserver-old-k8s-version-936214            kube-system
	5b122d1f21fed       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   1c96b18ef54b3       kube-scheduler-old-k8s-version-936214            kube-system
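	
	This table is minikube's rendering of the CRI container list; the same data can be read live from the node with crictl, which ships in the kicbase image. For example:
	
	    minikube -p old-k8s-version-936214 ssh -- sudo crictl ps -a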
	
	
	==> coredns [b68c2b76407ab4c86e2495c2e3caf81b64c76f567312f8a81ed45af93305cc6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39090 - 56842 "HINFO IN 2288069620349017123.7995776864820276937. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0253404s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-936214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-936214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-936214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-936214
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:21:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:21:26 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:21:26 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:21:26 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:21:26 +0000   Thu, 20 Nov 2025 21:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-936214
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                6cfc11cb-7b0f-45ce-af89-7b901c8d9e72
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-5t2cr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-936214                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-949k6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-936214             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-936214    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-z9sk2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-936214             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-936214 event: Registered Node old-k8s-version-936214 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-936214 status is now: NodeReady
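	
	The section above is standard kubectl describe output; it can be regenerated against this profile with minikube's bundled kubectl, for example:
	
	    minikube -p old-k8s-version-936214 kubectl -- describe node old-k8s-version-936214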
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [5020a5d587bcf6334bdd4d29648ca6a86edfb2ce965c058957e4b390ec06b6ad] <==
	{"level":"warn","ts":"2025-11-20T21:21:09.252375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.828167Z","time spent":"423.921706ms","remote":"127.0.0.1:39040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":899,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:115 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:863 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"warn","ts":"2025-11-20T21:21:09.252411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.878707Z","time spent":"373.678783ms","remote":"127.0.0.1:38900","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5697,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/old-k8s-version-936214\" mod_revision:329 > success:<request_put:<key:\"/registry/minions/old-k8s-version-936214\" value_size:5649 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-936214\" > >"}
	{"level":"info","ts":"2025-11-20T21:21:09.252439Z","caller":"traceutil/trace.go:171","msg":"trace[97194311] linearizableReadLoop","detail":"{readStateIndex:356; appliedIndex:351; }","duration":"390.933883ms","start":"2025-11-20T21:21:08.861496Z","end":"2025-11-20T21:21:09.25243Z","steps":["trace[97194311] 'read index received'  (duration: 130.469046ms)","trace[97194311] 'applied index is now lower than readState.Index'  (duration: 260.463729ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:21:09.252493Z","caller":"traceutil/trace.go:171","msg":"trace[1730986272] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"424.175435ms","start":"2025-11-20T21:21:08.82831Z","end":"2025-11-20T21:21:09.252486Z","steps":["trace[1730986272] 'process raft request'  (duration: 423.682769ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.252529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.828303Z","time spent":"424.204769ms","remote":"127.0.0.1:39040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2094,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:117 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:2059 >> failure:<request_range:<key:\"/registry/clusterroles/view\" > >"}
	{"level":"warn","ts":"2025-11-20T21:21:09.252615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.176956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-936214\" ","response":"range_response_count:1 size:5711"}
	{"level":"info","ts":"2025-11-20T21:21:09.252644Z","caller":"traceutil/trace.go:171","msg":"trace[1083893777] range","detail":"{range_begin:/registry/minions/old-k8s-version-936214; range_end:; response_count:1; response_revision:351; }","duration":"416.211172ms","start":"2025-11-20T21:21:08.836425Z","end":"2025-11-20T21:21:09.252636Z","steps":["trace[1083893777] 'agreement among raft nodes before linearized reading'  (duration: 416.127302ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.252652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.192895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-20T21:21:09.25267Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.836409Z","time spent":"416.254996ms","remote":"127.0.0.1:38900","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":5734,"request content":"key:\"/registry/minions/old-k8s-version-936214\" "}
	{"level":"warn","ts":"2025-11-20T21:21:09.252665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.173253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3193"}
	{"level":"info","ts":"2025-11-20T21:21:09.252681Z","caller":"traceutil/trace.go:171","msg":"trace[2001852716] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:351; }","duration":"289.211151ms","start":"2025-11-20T21:21:08.963454Z","end":"2025-11-20T21:21:09.252665Z","steps":["trace[2001852716] 'agreement among raft nodes before linearized reading'  (duration: 289.177364ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:09.252697Z","caller":"traceutil/trace.go:171","msg":"trace[164431396] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:351; }","duration":"416.214274ms","start":"2025-11-20T21:21:08.836474Z","end":"2025-11-20T21:21:09.252688Z","steps":["trace[164431396] 'agreement among raft nodes before linearized reading'  (duration: 416.103877ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.252721Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.836409Z","time spent":"416.305668ms","remote":"127.0.0.1:39136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3216,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"info","ts":"2025-11-20T21:21:09.252758Z","caller":"traceutil/trace.go:171","msg":"trace[1603367743] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"377.59632ms","start":"2025-11-20T21:21:08.875151Z","end":"2025-11-20T21:21:09.252747Z","steps":["trace[1603367743] 'process raft request'  (duration: 377.108163ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.252797Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.87514Z","time spent":"377.636176ms","remote":"127.0.0.1:38966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1050,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-w6vhb\" mod_revision:0 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-w6vhb\" value_size:991 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T21:21:09.252893Z","caller":"traceutil/trace.go:171","msg":"trace[607247482] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"378.956858ms","start":"2025-11-20T21:21:08.873925Z","end":"2025-11-20T21:21:09.252882Z","steps":["trace[607247482] 'process raft request'  (duration: 378.305921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.25262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"385.007128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T21:21:09.252932Z","caller":"traceutil/trace.go:171","msg":"trace[1484143808] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:351; }","duration":"385.319782ms","start":"2025-11-20T21:21:08.867606Z","end":"2025-11-20T21:21:09.252926Z","steps":["trace[1484143808] 'agreement among raft nodes before linearized reading'  (duration: 384.993002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:09.252932Z","caller":"traceutil/trace.go:171","msg":"trace[1367414006] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"377.610157ms","start":"2025-11-20T21:21:08.875304Z","end":"2025-11-20T21:21:09.252914Z","steps":["trace[1367414006] 'process raft request'  (duration: 376.979149ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.252947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.873912Z","time spent":"379.005756ms","remote":"127.0.0.1:38836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1735,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" value_size:1678 >> failure:<>"}
	{"level":"warn","ts":"2025-11-20T21:21:09.252956Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.867594Z","time spent":"385.353274ms","remote":"127.0.0.1:39202","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-11-20T21:21:09.252969Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.875282Z","time spent":"377.666586ms","remote":"127.0.0.1:39064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":977,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/storageclasses/standard\" mod_revision:0 > success:<request_put:<key:\"/registry/storageclasses/standard\" value_size:936 >> failure:<>"}
	{"level":"info","ts":"2025-11-20T21:21:09.252893Z","caller":"traceutil/trace.go:171","msg":"trace[1179597643] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"380.523989ms","start":"2025-11-20T21:21:08.87236Z","end":"2025-11-20T21:21:09.252884Z","steps":["trace[1179597643] 'process raft request'  (duration: 379.813776ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:09.253038Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-20T21:21:08.872309Z","time spent":"380.712012ms","remote":"127.0.0.1:39152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3620,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3560 >> failure:<>"}
	
	
	==> kernel <==
	 21:21:37 up  4:03,  0 user,  load average: 7.74, 5.11, 2.89
	Linux old-k8s-version-936214 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7150fb2ba61994d74fa1bb390b7fbfb2953767c3501d61d4c7d08a2a355841b2] <==
	I1120 21:21:12.392742       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:21:12.393316       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:21:12.393528       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:21:12.393546       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:21:12.393560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:21:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:21:12.597240       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:21:12.634381       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:21:12.634429       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:21:12.692506       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:21:13.092816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:21:13.092847       1 metrics.go:72] Registering metrics
	I1120 21:21:13.092912       1 controller.go:711] "Syncing nftables rules"
	I1120 21:21:22.605315       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:21:22.605389       1 main.go:301] handling current node
	I1120 21:21:32.599346       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:21:32.599379       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a7a0d39d07913d08c40ef7ba47acb71a72c855651b7625edf2a7728ea0505e2f] <==
	I1120 21:20:53.095891       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 21:20:53.096232       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 21:20:53.096352       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 21:20:53.096397       1 aggregator.go:166] initial CRD sync complete...
	I1120 21:20:53.096407       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 21:20:53.096416       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:20:53.096423       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:20:53.097159       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 21:20:53.109730       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:20:54.000537       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:20:54.004377       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:20:54.004395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:20:54.581110       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:20:54.624893       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:20:54.705842       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:20:54.711971       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1120 21:20:54.713278       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 21:20:54.718787       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:20:55.043449       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 21:20:55.980386       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 21:20:55.991943       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:20:56.003989       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1120 21:21:08.825364       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:21:08.871765       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5d95207d033fe6e639c4cd7c57e41c7aa39e876d421ab821f5e268c1342340b6] <==
	I1120 21:21:07.992374       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1120 21:21:08.044536       1 shared_informer.go:318] Caches are synced for disruption
	I1120 21:21:08.067407       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:21:08.096555       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1120 21:21:08.105144       1 shared_informer.go:318] Caches are synced for deployment
	I1120 21:21:08.117810       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:21:08.439103       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:21:08.450605       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:21:08.450749       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 21:21:09.255900       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1120 21:21:09.326722       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-949k6"
	I1120 21:21:09.328715       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z9sk2"
	I1120 21:21:09.347718       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6xc5j"
	I1120 21:21:09.357300       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5t2cr"
	I1120 21:21:09.359887       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1120 21:21:09.385576       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.849631ms"
	I1120 21:21:09.397027       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6xc5j"
	I1120 21:21:09.404139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.490327ms"
	I1120 21:21:09.416165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.812778ms"
	I1120 21:21:09.416459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.754µs"
	I1120 21:21:23.125500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.351µs"
	I1120 21:21:23.137934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="307.203µs"
	I1120 21:21:24.190157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.353423ms"
	I1120 21:21:24.190403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.371µs"
	I1120 21:21:27.911807       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [182dbe340bc863a1bedc2eaeca010d4bf38f6f5825b753e9574716c8ea1113f9] <==
	I1120 21:21:09.743514       1 server_others.go:69] "Using iptables proxy"
	I1120 21:21:09.755209       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1120 21:21:09.775356       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:21:09.777859       1 server_others.go:152] "Using iptables Proxier"
	I1120 21:21:09.777898       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 21:21:09.777904       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 21:21:09.777935       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 21:21:09.778190       1 server.go:846] "Version info" version="v1.28.0"
	I1120 21:21:09.778208       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:21:09.780611       1 config.go:97] "Starting endpoint slice config controller"
	I1120 21:21:09.780688       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 21:21:09.780750       1 config.go:188] "Starting service config controller"
	I1120 21:21:09.780756       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 21:21:09.780655       1 config.go:315] "Starting node config controller"
	I1120 21:21:09.780768       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 21:21:09.881700       1 shared_informer.go:318] Caches are synced for node config
	I1120 21:21:09.881730       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 21:21:09.881742       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [5b122d1f21fed0f37daffb1856e173ae387a1273fc4e54b229352c29995fb9d1] <==
	E1120 21:20:53.067113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 21:20:53.066978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:20:53.067142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:20:53.067112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:20:53.880326       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1120 21:20:53.880365       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:20:53.930680       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 21:20:53.930711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 21:20:53.957350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1120 21:20:53.957390       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 21:20:54.007293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 21:20:54.007334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 21:20:54.011951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1120 21:20:54.011987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 21:20:54.041562       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 21:20:54.041599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1120 21:20:54.102210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:20:54.102287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 21:20:54.164948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1120 21:20:54.164992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:20:54.187425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1120 21:20:54.187521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1120 21:20:54.339662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1120 21:20:54.339706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1120 21:20:55.762374       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 21:21:07 old-k8s-version-936214 kubelet[1389]: I1120 21:21:07.938500    1389 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:21:07 old-k8s-version-936214 kubelet[1389]: I1120 21:21:07.939259    1389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.343823    1389 topology_manager.go:215] "Topology Admit Handler" podUID="9bc52d10-b8b8-4805-ae1b-cbae97dc25ad" podNamespace="kube-system" podName="kube-proxy-z9sk2"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.348394    1389 topology_manager.go:215] "Topology Admit Handler" podUID="d19f5da9-8bc8-46f6-a8d5-25503820d80d" podNamespace="kube-system" podName="kindnet-949k6"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530310    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bc52d10-b8b8-4805-ae1b-cbae97dc25ad-xtables-lock\") pod \"kube-proxy-z9sk2\" (UID: \"9bc52d10-b8b8-4805-ae1b-cbae97dc25ad\") " pod="kube-system/kube-proxy-z9sk2"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530358    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9fl\" (UniqueName: \"kubernetes.io/projected/d19f5da9-8bc8-46f6-a8d5-25503820d80d-kube-api-access-tt9fl\") pod \"kindnet-949k6\" (UID: \"d19f5da9-8bc8-46f6-a8d5-25503820d80d\") " pod="kube-system/kindnet-949k6"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530385    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d19f5da9-8bc8-46f6-a8d5-25503820d80d-cni-cfg\") pod \"kindnet-949k6\" (UID: \"d19f5da9-8bc8-46f6-a8d5-25503820d80d\") " pod="kube-system/kindnet-949k6"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530416    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr98\" (UniqueName: \"kubernetes.io/projected/9bc52d10-b8b8-4805-ae1b-cbae97dc25ad-kube-api-access-jdr98\") pod \"kube-proxy-z9sk2\" (UID: \"9bc52d10-b8b8-4805-ae1b-cbae97dc25ad\") " pod="kube-system/kube-proxy-z9sk2"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530444    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d19f5da9-8bc8-46f6-a8d5-25503820d80d-xtables-lock\") pod \"kindnet-949k6\" (UID: \"d19f5da9-8bc8-46f6-a8d5-25503820d80d\") " pod="kube-system/kindnet-949k6"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530471    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bc52d10-b8b8-4805-ae1b-cbae97dc25ad-lib-modules\") pod \"kube-proxy-z9sk2\" (UID: \"9bc52d10-b8b8-4805-ae1b-cbae97dc25ad\") " pod="kube-system/kube-proxy-z9sk2"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530495    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d19f5da9-8bc8-46f6-a8d5-25503820d80d-lib-modules\") pod \"kindnet-949k6\" (UID: \"d19f5da9-8bc8-46f6-a8d5-25503820d80d\") " pod="kube-system/kindnet-949k6"
	Nov 20 21:21:09 old-k8s-version-936214 kubelet[1389]: I1120 21:21:09.530529    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9bc52d10-b8b8-4805-ae1b-cbae97dc25ad-kube-proxy\") pod \"kube-proxy-z9sk2\" (UID: \"9bc52d10-b8b8-4805-ae1b-cbae97dc25ad\") " pod="kube-system/kube-proxy-z9sk2"
	Nov 20 21:21:10 old-k8s-version-936214 kubelet[1389]: I1120 21:21:10.136863    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z9sk2" podStartSLOduration=1.136804087 podCreationTimestamp="2025-11-20 21:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:10.136437325 +0000 UTC m=+14.181034576" watchObservedRunningTime="2025-11-20 21:21:10.136804087 +0000 UTC m=+14.181401341"
	Nov 20 21:21:13 old-k8s-version-936214 kubelet[1389]: I1120 21:21:13.143551    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-949k6" podStartSLOduration=1.713819745 podCreationTimestamp="2025-11-20 21:21:09 +0000 UTC" firstStartedPulling="2025-11-20 21:21:09.659098718 +0000 UTC m=+13.703695954" lastFinishedPulling="2025-11-20 21:21:12.088775139 +0000 UTC m=+16.133372376" observedRunningTime="2025-11-20 21:21:13.143426399 +0000 UTC m=+17.188023651" watchObservedRunningTime="2025-11-20 21:21:13.143496167 +0000 UTC m=+17.188093418"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.100406    1389 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.126047    1389 topology_manager.go:215] "Topology Admit Handler" podUID="3f5376b3-6d7d-4564-9dc0-d27a0882903a" podNamespace="kube-system" podName="coredns-5dd5756b68-5t2cr"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.127590    1389 topology_manager.go:215] "Topology Admit Handler" podUID="cf765557-656c-4944-bd0d-2cd578d3e885" podNamespace="kube-system" podName="storage-provisioner"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.231689    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f5376b3-6d7d-4564-9dc0-d27a0882903a-config-volume\") pod \"coredns-5dd5756b68-5t2cr\" (UID: \"3f5376b3-6d7d-4564-9dc0-d27a0882903a\") " pod="kube-system/coredns-5dd5756b68-5t2cr"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.231750    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf765557-656c-4944-bd0d-2cd578d3e885-tmp\") pod \"storage-provisioner\" (UID: \"cf765557-656c-4944-bd0d-2cd578d3e885\") " pod="kube-system/storage-provisioner"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.231789    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6nj\" (UniqueName: \"kubernetes.io/projected/cf765557-656c-4944-bd0d-2cd578d3e885-kube-api-access-4s6nj\") pod \"storage-provisioner\" (UID: \"cf765557-656c-4944-bd0d-2cd578d3e885\") " pod="kube-system/storage-provisioner"
	Nov 20 21:21:23 old-k8s-version-936214 kubelet[1389]: I1120 21:21:23.231862    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r2jc\" (UniqueName: \"kubernetes.io/projected/3f5376b3-6d7d-4564-9dc0-d27a0882903a-kube-api-access-6r2jc\") pod \"coredns-5dd5756b68-5t2cr\" (UID: \"3f5376b3-6d7d-4564-9dc0-d27a0882903a\") " pod="kube-system/coredns-5dd5756b68-5t2cr"
	Nov 20 21:21:24 old-k8s-version-936214 kubelet[1389]: I1120 21:21:24.169838    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.169788632 podCreationTimestamp="2025-11-20 21:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:24.169701345 +0000 UTC m=+28.214298599" watchObservedRunningTime="2025-11-20 21:21:24.169788632 +0000 UTC m=+28.214385884"
	Nov 20 21:21:24 old-k8s-version-936214 kubelet[1389]: I1120 21:21:24.179885    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5t2cr" podStartSLOduration=15.17983347 podCreationTimestamp="2025-11-20 21:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:24.17973482 +0000 UTC m=+28.224332074" watchObservedRunningTime="2025-11-20 21:21:24.17983347 +0000 UTC m=+28.224430722"
	Nov 20 21:21:26 old-k8s-version-936214 kubelet[1389]: I1120 21:21:26.190650    1389 topology_manager.go:215] "Topology Admit Handler" podUID="1b53bd6f-5850-4bee-9c34-0ebd759fa96b" podNamespace="default" podName="busybox"
	Nov 20 21:21:26 old-k8s-version-936214 kubelet[1389]: I1120 21:21:26.249889    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pv8j\" (UniqueName: \"kubernetes.io/projected/1b53bd6f-5850-4bee-9c34-0ebd759fa96b-kube-api-access-2pv8j\") pod \"busybox\" (UID: \"1b53bd6f-5850-4bee-9c34-0ebd759fa96b\") " pod="default/busybox"
	
	
	==> storage-provisioner [30d483cf3c948ca6e2ab5bc7f16359e09479713f6231429cc9d6e69cbda7709c] <==
	I1120 21:21:23.501312       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:21:23.519180       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:21:23.519257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 21:21:23.532740       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:21:23.532971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_4b7fcff9-595c-4550-8c75-bdbc16bdcf17!
	I1120 21:21:23.535156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e2486bc-d5c5-4ff5-8f75-9bef5c9224fc", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-936214_4b7fcff9-595c-4550-8c75-bdbc16bdcf17 became leader
	I1120 21:21:23.633665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_4b7fcff9-595c-4550-8c75-bdbc16bdcf17!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-936214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)
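Note on the scheduler log above: the burst of "forbidden" list/watch errors for system:kube-scheduler looks like the usual startup race before the default RBAC roles are bootstrapped; it ends once the informer caches sync at 21:20:55, so it is unlikely to be the cause of this failure. A quick way to confirm the permissions eventually landed (profile name taken from this run; kubectl auth can-i is a standard subcommand):

	kubectl --context old-k8s-version-936214 auth can-i list persistentvolumes --as=system:kube-scheduler
	# "yes" here means the earlier denials were transient, not a broken role binding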

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.538916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:21:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
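Per the stderr above, the enable fails before touching the addon at all: minikube's paused-runtime check runs sudo runc list -f json on the node, and that exits 1 because /run/runc is missing. A minimal sketch for inspecting this by hand, assuming the profile from this run (runc's list subcommand and its -f json format flag are real; whether cri-o keeps its runc state under /run/runc on this image is an assumption to verify):

	# does the runc state directory exist on the node?
	minikube -p no-preload-166874 ssh -- ls -ld /run/runc
	# re-run the exact command from the error message
	minikube -p no-preload-166874 ssh -- sudo runc list -f json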
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-166874 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-166874 describe deploy/metrics-server -n kube-system: exit status 1 (74.361561ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-166874 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
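The assertion at :219 greps the describe output for the rewritten image fake.domain/registry.k8s.io/echoserver:1.4, but since the enable exited 11 before creating anything, there is no deployment to inspect. Once the deployment exists, a sketch of the same check with plain kubectl (standard jsonpath query; context name from this run):

	kubectl --context no-preload-166874 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'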
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-166874
helpers_test.go:243: (dbg) docker inspect no-preload-166874:

-- stdout --
	[
	    {
	        "Id": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	        "Created": "2025-11-20T21:20:53.087247999Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 535343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:20:53.142288013Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hostname",
	        "HostsPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hosts",
	        "LogPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f-json.log",
	        "Name": "/no-preload-166874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-166874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-166874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	                "LowerDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-166874",
	                "Source": "/var/lib/docker/volumes/no-preload-166874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-166874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-166874",
	                "name.minikube.sigs.k8s.io": "no-preload-166874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0df5c3002fdd4205c75efbd28b9ab4638028897165c56fb4f98fb27621ddff12",
	            "SandboxKey": "/var/run/docker/netns/0df5c3002fdd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-166874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bf71dac4c7dbfe0cbfa1577ea48c4b78277a2aaefe1bc1e081bb5b02ff78f81",
	                    "EndpointID": "66089ad03a1a3ef07c0b7c57628fd49c83561c407163a76f8f4644f11e1710b9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:38:bf:33:42:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-166874",
	                        "745a5057ecd0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
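One detail in the inspect output worth noting: the container was created with empty HostPort values (Docker assigns free host ports at start), so the live mappings only appear under NetworkSettings.Ports, e.g. 8443/tcp bound to 127.0.0.1:33096. A shorter way to read the same mapping without parsing the JSON (standard docker CLI; container name from this run):

	docker port no-preload-166874 8443
	# expected for this run: 127.0.0.1:33096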
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-166874 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-166874 logs -n 25: (1.145658267s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat docker --no-pager                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/docker/daemon.json                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo docker system info                                                                                                                                                                                              │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p old-k8s-version-936214 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:21:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:21:54.553588  555241 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:21:54.553746  555241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:54.553758  555241 out.go:374] Setting ErrFile to fd 2...
	I1120 21:21:54.553764  555241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:54.554000  555241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:21:54.554511  555241 out.go:368] Setting JSON to false
	I1120 21:21:54.555812  555241 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14657,"bootTime":1763659058,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:21:54.555922  555241 start.go:143] virtualization: kvm guest
	I1120 21:21:54.558245  555241 out.go:179] * [old-k8s-version-936214] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:21:54.559604  555241 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:21:54.559618  555241 notify.go:221] Checking for updates...
	I1120 21:21:54.562377  555241 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:21:54.563610  555241 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:54.564815  555241 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:21:54.566134  555241 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:21:54.567428  555241 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:21:54.568985  555241 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:21:54.570642  555241 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 21:21:54.571723  555241 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:21:54.601002  555241 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:21:54.601106  555241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:54.672614  555241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:21:54.658315121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:54.672742  555241 docker.go:319] overlay module found
	I1120 21:21:54.675091  555241 out.go:179] * Using the docker driver based on existing profile
	I1120 21:21:54.676461  555241 start.go:309] selected driver: docker
	I1120 21:21:54.676481  555241 start.go:930] validating driver "docker" against &{Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:54.676593  555241 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:21:54.677294  555241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:54.758174  555241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:21:54.742972312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:54.758619  555241 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:21:54.758672  555241 cni.go:84] Creating CNI manager for ""
	I1120 21:21:54.758743  555241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:21:54.758800  555241 start.go:353] cluster config:
	{Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
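The block above is minikube's full cluster config for the profile, which is persisted to the profile's config.json (see the "Saving config" line a few entries below). To pull a single field back out of a saved profile, something like the following works, assuming jq is available and that the on-disk JSON mirrors the struct fields printed above:

    $ jq '.KubernetesConfig.KubernetesVersion' \
        /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/config.json
    "v1.28.0"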
	I1120 21:21:54.762052  555241 out.go:179] * Starting "old-k8s-version-936214" primary control-plane node in "old-k8s-version-936214" cluster
	I1120 21:21:54.763894  555241 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:21:54.766252  555241 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:21:54.767756  555241 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 21:21:54.767993  555241 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1120 21:21:54.768021  555241 cache.go:65] Caching tarball of preloaded images
	I1120 21:21:54.768147  555241 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:21:54.768164  555241 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
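Minikube keeps one preloaded image tarball per Kubernetes version and container runtime so it can skip pulling every image on start. A quick way to confirm which preloads are cached locally, using the path from the log above:

    $ ls -lh /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/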
	I1120 21:21:54.768192  555241 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:21:54.768383  555241 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/config.json ...
	I1120 21:21:54.796979  555241 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:21:54.797035  555241 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:21:54.797052  555241 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:21:54.797090  555241 start.go:360] acquireMachinesLock for old-k8s-version-936214: {Name:mkfa161e7f3757034713c6013c89f32db01373db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:21:54.797258  555241 start.go:364] duration metric: took 90.697µs to acquireMachinesLock for "old-k8s-version-936214"
	I1120 21:21:54.797285  555241 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:21:54.797292  555241 fix.go:54] fixHost starting: 
	I1120 21:21:54.797661  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:21:54.817547  555241 fix.go:112] recreateIfNeeded on old-k8s-version-936214: state=Stopped err=<nil>
	W1120 21:21:54.817613  555241 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:21:54.248885  552911 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-454524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
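The --format argument above is a Go template that assembles a JSON object (name, driver, subnet, gateway, MTU, container IPs) from the network's inspect data; the escaped quotes are part of the template, not corruption. A simpler sketch of the same lookup, keeping only the subnet and gateway:

    $ docker network inspect default-k8s-diff-port-454524 \
        --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'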
	I1120 21:21:54.268479  552911 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:21:54.273005  552911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
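The one-liner above is minikube's idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the fresh record is appended, and the result lands via a temp file plus sudo cp (a plain redirect would be opened by the unprivileged shell before sudo elevates). The same pattern, generalized with hypothetical HOSTNAME/IP values:

    $ { grep -v $'\thost.example$' /etc/hosts; printf '%s\thost.example\n' "$IP"; } > /tmp/h.$$
    $ sudo cp /tmp/h.$$ /etc/hosts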
	I1120 21:21:54.285259  552911 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:21:54.285438  552911 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:21:54.285502  552911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:21:54.324554  552911 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:21:54.324592  552911 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:21:54.324655  552911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:21:54.357033  552911 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:21:54.357062  552911 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:21:54.357072  552911 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1120 21:21:54.357202  552911 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-454524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
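Note the empty ExecStart= line before the real one in the generated unit: in a systemd drop-in, assigning ExecStart an empty value clears whatever the base unit declared, so the following ExecStart replaces it instead of being rejected as a duplicate for a non-oneshot service. After minikube writes the drop-in (the 10-kubeadm.conf scp below), the effective unit can be inspected on the node with:

    $ systemctl cat kubelet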
	I1120 21:21:54.357310  552911 ssh_runner.go:195] Run: crio config
	I1120 21:21:54.410548  552911 cni.go:84] Creating CNI manager for ""
	I1120 21:21:54.410568  552911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:21:54.410586  552911 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:21:54.410613  552911 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-454524 NodeName:default-k8s-diff-port-454524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:21:54.410766  552911 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-454524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
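The generated file above stitches four documents together with --- separators: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (cluster-wide settings), KubeletConfiguration, and KubeProxyConfiguration. It is staged as kubeadm.yaml.new below, promoted to kubeadm.yaml, and consumed in one shot; a simplified form of the invocation that appears later in this log:

    $ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml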
	I1120 21:21:54.410836  552911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:21:54.420288  552911 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:21:54.420358  552911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:21:54.429446  552911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1120 21:21:54.444770  552911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:21:54.463288  552911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1120 21:21:54.478129  552911 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:21:54.482785  552911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:21:54.495377  552911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:21:54.598420  552911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:21:54.630060  552911 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524 for IP: 192.168.85.2
	I1120 21:21:54.630089  552911 certs.go:195] generating shared ca certs ...
	I1120 21:21:54.630112  552911 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.630302  552911 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:21:54.630371  552911 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:21:54.630387  552911 certs.go:257] generating profile certs ...
	I1120 21:21:54.630468  552911 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key
	I1120 21:21:54.630600  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt with IP's: []
	I1120 21:21:54.712179  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt ...
	I1120 21:21:54.712211  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt: {Name:mk3d98c70bd25799f85ae7ed1a857e2173181eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.712459  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key ...
	I1120 21:21:54.712484  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key: {Name:mk402653303a22fa60081c20f8184971901365ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.712641  552911 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9
	I1120 21:21:54.712662  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
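The apiserver certificate is issued for the Service VIP (10.96.0.1, the first address of the 10.96.0.0/12 ServiceCIDR), loopback, and the node IP, so in-cluster clients and direct connections all validate against the same cert. To confirm the SANs on the generated cert, assuming openssl is available:

    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt \
        | grep -A1 'Subject Alternative Name'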
	I1120 21:21:54.433402  545013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:54.933453  545013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:55.030343  545013 kubeadm.go:1114] duration metric: took 4.187525484s to wait for elevateKubeSystemPrivileges
	I1120 21:21:55.030380  545013 kubeadm.go:403] duration metric: took 14.827786955s to StartCluster
	I1120 21:21:55.030402  545013 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.030466  545013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:55.032110  545013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.032396  545013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:21:55.032396  545013 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:21:55.032498  545013 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:21:55.032592  545013 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:55.032604  545013 addons.go:70] Setting default-storageclass=true in profile "embed-certs-714571"
	I1120 21:21:55.032622  545013 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-714571"
	I1120 21:21:55.032596  545013 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-714571"
	I1120 21:21:55.032682  545013 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-714571"
	I1120 21:21:55.032714  545013 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:21:55.033026  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.033181  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.035724  545013 out.go:179] * Verifying Kubernetes components...
	I1120 21:21:55.037042  545013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:21:55.067501  545013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:21:55.067855  545013 addons.go:239] Setting addon default-storageclass=true in "embed-certs-714571"
	I1120 21:21:55.067971  545013 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:21:55.068589  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.069249  545013 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:55.069271  545013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:21:55.069324  545013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-714571
	I1120 21:21:55.109451  545013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/embed-certs-714571/id_rsa Username:docker}
	I1120 21:21:55.117703  545013 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:55.117752  545013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:21:55.117841  545013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-714571
	I1120 21:21:55.149079  545013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/embed-certs-714571/id_rsa Username:docker}
	I1120 21:21:55.179500  545013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
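The sed pipeline above splices a hosts plugin stanza into the Corefile ahead of the forward directive, so host.minikube.internal resolves to the host gateway from inside pods while fallthrough passes every other name on as before. The effect can be checked after the replace:

    $ kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'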
	I1120 21:21:55.252245  545013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:21:55.271778  545013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:55.315547  545013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:55.494812  545013 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 21:21:55.496904  545013 node_ready.go:35] waiting up to 6m0s for node "embed-certs-714571" to be "Ready" ...
	I1120 21:21:55.732812  545013 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:21:55.734627  545013 addons.go:515] duration metric: took 702.116993ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:21:55.999864  545013 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-714571" context rescaled to 1 replicas
	W1120 21:21:57.500346  545013 node_ready.go:57] node "embed-certs-714571" has "Ready":"False" status (will retry)
	I1120 21:21:54.819263  555241 out.go:252] * Restarting existing docker container for "old-k8s-version-936214" ...
	I1120 21:21:54.819356  555241 cli_runner.go:164] Run: docker start old-k8s-version-936214
	I1120 21:21:55.244167  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:21:55.281159  555241 kic.go:430] container "old-k8s-version-936214" state is running.
	I1120 21:21:55.282550  555241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-936214
	I1120 21:21:55.320411  555241 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/config.json ...
	I1120 21:21:55.320717  555241 machine.go:94] provisionDockerMachine start ...
	I1120 21:21:55.320808  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:55.358589  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:55.358988  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:55.359031  555241 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:21:55.359930  555241 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54594->127.0.0.1:33108: read: connection reset by peer
	I1120 21:21:58.499142  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-936214
	
	I1120 21:21:58.499199  555241 ubuntu.go:182] provisioning hostname "old-k8s-version-936214"
	I1120 21:21:58.499316  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:58.519571  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:58.519868  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:58.519890  555241 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-936214 && echo "old-k8s-version-936214" | sudo tee /etc/hostname
	I1120 21:21:58.666499  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-936214
	
	I1120 21:21:58.666605  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:58.686915  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:58.687147  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:58.687165  555241 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-936214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-936214/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-936214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:21:58.822444  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:21:58.822479  555241 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:21:58.822533  555241 ubuntu.go:190] setting up certificates
	I1120 21:21:58.822547  555241 provision.go:84] configureAuth start
	I1120 21:21:58.822622  555241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-936214
	I1120 21:21:58.841961  555241 provision.go:143] copyHostCerts
	I1120 21:21:58.842050  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:21:58.842073  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:21:58.842168  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:21:58.842339  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:21:58.842355  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:21:58.842392  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:21:58.842471  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:21:58.842482  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:21:58.842516  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:21:58.842589  555241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-936214 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-936214]
	I1120 21:21:58.996531  555241 provision.go:177] copyRemoteCerts
	I1120 21:21:58.996607  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:21:58.996659  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.016004  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.114756  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:21:59.136505  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:21:59.157513  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 21:21:59.178867  555241 provision.go:87] duration metric: took 356.29936ms to configureAuth
	I1120 21:21:59.178903  555241 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:21:59.179125  555241 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:21:59.179293  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.199773  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:59.200111  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:59.200136  555241 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:21:59.548810  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:21:59.548843  555241 machine.go:97] duration metric: took 4.228100029s to provisionDockerMachine
	I1120 21:21:59.548858  555241 start.go:293] postStartSetup for "old-k8s-version-936214" (driver="docker")
	I1120 21:21:59.548872  555241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:21:59.549018  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:21:59.549074  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:55.109828  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 ...
	I1120 21:21:55.109870  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9: {Name:mkfdb2b4d71c68fef0215c1daf879c3722fe5565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.111051  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9 ...
	I1120 21:21:55.111130  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9: {Name:mk45c2f0985af3a723bec9c6a67c4720324ba6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.111424  552911 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt
	I1120 21:21:55.111594  552911 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key
	I1120 21:21:55.111722  552911 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key
	I1120 21:21:55.111768  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt with IP's: []
	I1120 21:21:55.437537  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt ...
	I1120 21:21:55.437579  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt: {Name:mke4c3c3a0e2db2a3c44c52d3fcba5e8c036ede8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.437839  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key ...
	I1120 21:21:55.437869  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key: {Name:mkc9e0c5b7fcd5882214e2ef2d6beb9b3938ee0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.438187  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:21:55.438264  552911 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:21:55.438281  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:21:55.438310  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:21:55.438354  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:21:55.438388  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:21:55.438451  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:21:55.439378  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:21:55.475526  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:21:55.503723  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:21:55.530739  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:21:55.558714  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:21:55.588195  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:21:55.612911  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:21:55.636556  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:21:55.673448  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:21:55.699724  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:21:55.724767  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:21:55.745645  552911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:21:55.760505  552911 ssh_runner.go:195] Run: openssl version
	I1120 21:21:55.767381  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.775968  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:21:55.784376  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.788757  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.788818  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.826571  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:21:55.835689  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:21:55.844752  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.853116  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:21:55.861830  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.865827  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.865976  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.906919  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:21:55.915552  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:21:55.924027  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.931887  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:21:55.939687  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.943947  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.944000  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.980287  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:21:55.989513  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
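The openssl x509 -hash calls above compute the subject-name hash that OpenSSL expects as a symlink name in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 here); the .0 suffix disambiguates hash collisions. The same linking can be reproduced by hand:

    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$h.0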
	I1120 21:21:55.998075  552911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:21:56.002695  552911 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:21:56.002748  552911 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:56.002822  552911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:21:56.002876  552911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:21:56.032426  552911 cri.go:89] found id: ""
	I1120 21:21:56.032516  552911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:21:56.041726  552911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:21:56.050607  552911 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:21:56.050674  552911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:21:56.059286  552911 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:21:56.059318  552911 kubeadm.go:158] found existing configuration files:
	
	I1120 21:21:56.059367  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1120 21:21:56.067851  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:21:56.067906  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:21:56.076272  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1120 21:21:56.084929  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:21:56.084992  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:21:56.094198  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1120 21:21:56.102708  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:21:56.102762  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:21:56.111192  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1120 21:21:56.120473  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:21:56.120534  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
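The sequence above is minikube's stale-config sweep before kubeadm init: each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf is grepped for the expected endpoint (https://control-plane.minikube.internal:8444), and any file missing it, or missing entirely, is removed so kubeadm regenerates it. A manual equivalent for one file:

    $ sudo grep -q 'https://control-plane.minikube.internal:8444' /etc/kubernetes/admin.conf \
        || sudo rm -f /etc/kubernetes/admin.conf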
	I1120 21:21:56.129849  552911 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:21:56.192943  552911 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:21:56.256339  552911 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
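Both warnings are expected on the docker driver: the host kernel's module config is not visible inside the container, and minikube starts kubelet itself rather than enabling the systemd unit, which is why the init command above carries such a long --ignore-preflight-errors list. To rerun just the preflight phase, assuming the same staged config:

    $ sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml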
	
	
	==> CRI-O <==
	Nov 20 21:21:46 no-preload-166874 crio[768]: time="2025-11-20T21:21:46.928271339Z" level=info msg="Starting container: 8e5b1ba9f90b6525db24eb467c89152b5d311edda9653fe4439a9049cc06334d" id=1f5667c2-a6cc-44a1-9c11-e81704e8bbbb name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:21:46 no-preload-166874 crio[768]: time="2025-11-20T21:21:46.930510298Z" level=info msg="Started container" PID=2918 containerID=8e5b1ba9f90b6525db24eb467c89152b5d311edda9653fe4439a9049cc06334d description=kube-system/coredns-66bc5c9577-knwbq/coredns id=1f5667c2-a6cc-44a1-9c11-e81704e8bbbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffa5256fb4b2efdcd67e4abc198a0803921e5619da83cbfdc79e2ad9c5fc5d47
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.494709306Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ed69033e-265c-4a9f-a986-f53be17fe181 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.494827353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.50111234Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cac9d621c69f93bbc661900c50eece075b68c57b9f94b504a9ffc8100af88b7d UID:2648763c-5822-494b-91d6-789fd9fa6909 NetNS:/var/run/netns/7bc9eace-179b-4cb7-9af5-77f62fbdeae2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00094c3a8}] Aliases:map[]}"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.501160158Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.515000018Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cac9d621c69f93bbc661900c50eece075b68c57b9f94b504a9ffc8100af88b7d UID:2648763c-5822-494b-91d6-789fd9fa6909 NetNS:/var/run/netns/7bc9eace-179b-4cb7-9af5-77f62fbdeae2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00094c3a8}] Aliases:map[]}"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.515249212Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.516801636Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.517894332Z" level=info msg="Ran pod sandbox cac9d621c69f93bbc661900c50eece075b68c57b9f94b504a9ffc8100af88b7d with infra container: default/busybox/POD" id=ed69033e-265c-4a9f-a986-f53be17fe181 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.519856906Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c333a2f9-52f2-4584-97a1-301d02154197 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.520027925Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c333a2f9-52f2-4584-97a1-301d02154197 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.520084166Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c333a2f9-52f2-4584-97a1-301d02154197 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.520756004Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e73c647-ab81-4c48-bc24-b647d9e2710a name=/runtime.v1.ImageService/PullImage
	Nov 20 21:21:50 no-preload-166874 crio[768]: time="2025-11-20T21:21:50.526836369Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.035612875Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9e73c647-ab81-4c48-bc24-b647d9e2710a name=/runtime.v1.ImageService/PullImage
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.036180622Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30fcc8cf-0f79-4aab-98ca-7e4f71e34b2f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.038122505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eedeb6a3-afae-4016-b12d-ae6ebba6be9c name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.042130478Z" level=info msg="Creating container: default/busybox/busybox" id=bd44c5bf-5dba-4651-9720-65a63dcfba03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.042266405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.046643063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.047087807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.09510626Z" level=info msg="Created container b068599a1a33cdf568c6e977fa60f13389dec7d373faba7e68b9a7490d8013e4: default/busybox/busybox" id=bd44c5bf-5dba-4651-9720-65a63dcfba03 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.09582574Z" level=info msg="Starting container: b068599a1a33cdf568c6e977fa60f13389dec7d373faba7e68b9a7490d8013e4" id=13d74206-40fd-4086-863d-ec841ff342c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:21:53 no-preload-166874 crio[768]: time="2025-11-20T21:21:53.097638853Z" level=info msg="Started container" PID=2991 containerID=b068599a1a33cdf568c6e977fa60f13389dec7d373faba7e68b9a7490d8013e4 description=default/busybox/busybox id=13d74206-40fd-4086-863d-ec841ff342c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cac9d621c69f93bbc661900c50eece075b68c57b9f94b504a9ffc8100af88b7d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b068599a1a33c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   cac9d621c69f9       busybox                                     default
	8e5b1ba9f90b6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   ffa5256fb4b2e       coredns-66bc5c9577-knwbq                    kube-system
	658544e4b6d9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   8bcd90ed5d4ec       storage-provisioner                         kube-system
	0002ac18a6e2d       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   7f8cf181a990f       kindnet-w6hk4                               kube-system
	0fc32caf7c6d4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      28 seconds ago      Running             kube-proxy                0                   f38e17396d0b7       kube-proxy-8mtnk                            kube-system
	5b7af8597addb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      39 seconds ago      Running             kube-scheduler            0                   16d266e51b740       kube-scheduler-no-preload-166874            kube-system
	9b553690f8737       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      39 seconds ago      Running             etcd                      0                   5f099ec244d1a       etcd-no-preload-166874                      kube-system
	bb1030a439fa4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      39 seconds ago      Running             kube-controller-manager   0                   b6b76dcff7947       kube-controller-manager-no-preload-166874   kube-system
	dab4ad9ae4099       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      39 seconds ago      Running             kube-apiserver            0                   a7c37941b16b3       kube-apiserver-no-preload-166874            kube-system
	
	
	==> coredns [8e5b1ba9f90b6525db24eb467c89152b5d311edda9653fe4439a9049cc06334d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34590 - 5991 "HINFO IN 3370018237173749871.4286500232123263629. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025320636s
	
	
	==> describe nodes <==
	Name:               no-preload-166874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-166874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-166874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-166874
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:21:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:21:56 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:21:56 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:21:56 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:21:56 +0000   Thu, 20 Nov 2025 21:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-166874
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                ad73315e-0ad1-465a-82ef-174a9e25f51f
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-knwbq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-166874                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-w6hk4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-166874             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-166874    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-8mtnk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-166874             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node no-preload-166874 event: Registered Node no-preload-166874 in Controller
	  Normal  NodeReady                14s                kubelet          Node no-preload-166874 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [9b553690f87373e14a503b7e368c05df2476e3f0ebeff5b9b13c7935b7e6ff9c] <==
	{"level":"warn","ts":"2025-11-20T21:21:23.012625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.019958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.031066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.038258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.046820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.077517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.084387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.090826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:21:23.143932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:21:32.716914Z","caller":"traceutil/trace.go:172","msg":"trace[1962983813] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"133.971252ms","start":"2025-11-20T21:21:32.582922Z","end":"2025-11-20T21:21:32.716893Z","steps":["trace[1962983813] 'process raft request'  (duration: 68.496521ms)","trace[1962983813] 'compare'  (duration: 65.369007ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:21:32.989964Z","caller":"traceutil/trace.go:172","msg":"trace[879501550] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:421; }","duration":"116.244704ms","start":"2025-11-20T21:21:32.873694Z","end":"2025-11-20T21:21:32.989939Z","steps":["trace[879501550] 'read index received'  (duration: 116.235831ms)","trace[879501550] 'applied index is now lower than readState.Index'  (duration: 7.919µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:32.994815Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.087741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-166874\" limit:1 ","response":"range_response_count:1 size:7513"}
	{"level":"info","ts":"2025-11-20T21:21:32.994907Z","caller":"traceutil/trace.go:172","msg":"trace[1959294915] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-no-preload-166874; range_end:; response_count:1; response_revision:409; }","duration":"121.221742ms","start":"2025-11-20T21:21:32.873677Z","end":"2025-11-20T21:21:32.994899Z","steps":["trace[1959294915] 'agreement among raft nodes before linearized reading'  (duration: 116.368706ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:32.994847Z","caller":"traceutil/trace.go:172","msg":"trace[1560714329] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"123.415969ms","start":"2025-11-20T21:21:32.871413Z","end":"2025-11-20T21:21:32.994829Z","steps":["trace[1560714329] 'process raft request'  (duration: 118.588709ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:33.190270Z","caller":"traceutil/trace.go:172","msg":"trace[3642821] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"186.700669ms","start":"2025-11-20T21:21:33.003546Z","end":"2025-11-20T21:21:33.190246Z","steps":["trace[3642821] 'process raft request'  (duration: 95.24467ms)","trace[3642821] 'compare'  (duration: 91.292399ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:33.620820Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.997922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-166874\" limit:1 ","response":"range_response_count:1 size:7513"}
	{"level":"info","ts":"2025-11-20T21:21:33.621675Z","caller":"traceutil/trace.go:172","msg":"trace[1084634503] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-no-preload-166874; range_end:; response_count:1; response_revision:415; }","duration":"144.856098ms","start":"2025-11-20T21:21:33.476792Z","end":"2025-11-20T21:21:33.621649Z","steps":["trace[1084634503] 'range keys from in-memory index tree'  (duration: 143.860814ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:33.621892Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.69701ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766306749501682 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:352 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-20T21:21:33.621961Z","caller":"traceutil/trace.go:172","msg":"trace[593491612] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"142.690359ms","start":"2025-11-20T21:21:33.479257Z","end":"2025-11-20T21:21:33.621948Z","steps":["trace[593491612] 'process raft request'  (duration: 15.222703ms)","trace[593491612] 'compare'  (duration: 126.167657ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:21:33.763419Z","caller":"traceutil/trace.go:172","msg":"trace[1671320150] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"128.774434ms","start":"2025-11-20T21:21:33.634626Z","end":"2025-11-20T21:21:33.763400Z","steps":["trace[1671320150] 'process raft request'  (duration: 128.668059ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:49.369731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.70789ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T21:21:49.369861Z","caller":"traceutil/trace.go:172","msg":"trace[1131871598] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:456; }","duration":"117.852494ms","start":"2025-11-20T21:21:49.251992Z","end":"2025-11-20T21:21:49.369844Z","steps":["trace[1131871598] 'range keys from in-memory index tree'  (duration: 117.653752ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:49.369742Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.265204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-166874\" limit:1 ","response":"range_response_count:1 size:4803"}
	{"level":"info","ts":"2025-11-20T21:21:49.369954Z","caller":"traceutil/trace.go:172","msg":"trace[2028209012] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-166874; range_end:; response_count:1; response_revision:456; }","duration":"185.482911ms","start":"2025-11-20T21:21:49.184456Z","end":"2025-11-20T21:21:49.369939Z","steps":["trace[2028209012] 'range keys from in-memory index tree'  (duration: 185.093087ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:50.372345Z","caller":"traceutil/trace.go:172","msg":"trace[2028861049] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"172.933164ms","start":"2025-11-20T21:21:50.199390Z","end":"2025-11-20T21:21:50.372323Z","steps":["trace[2028861049] 'process raft request'  (duration: 140.525702ms)","trace[2028861049] 'compare'  (duration: 32.279419ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:22:00 up  4:04,  0 user,  load average: 6.37, 4.98, 2.90
	Linux no-preload-166874 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0002ac18a6e2d05e14862b9f2dc7c1a75b7e3bffa337d30c718f236626224a59] <==
	I1120 21:21:36.126295       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:21:36.126794       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1120 21:21:36.127025       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:21:36.127079       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:21:36.127133       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:21:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:21:36.425914       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:21:36.523590       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:21:36.523643       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:21:36.523934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:21:36.924064       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:21:36.924096       1 metrics.go:72] Registering metrics
	I1120 21:21:36.924188       1 controller.go:711] "Syncing nftables rules"
	I1120 21:21:46.429361       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:21:46.429416       1 main.go:301] handling current node
	I1120 21:21:56.426363       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:21:56.426401       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dab4ad9ae4099c48b0c3f2c94f84e572c238fdc1c564d893826924e9ef6f3b49] <==
	I1120 21:21:23.843026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:21:23.848570       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:21:23.848839       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:21:23.858450       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:21:23.858607       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:21:23.942714       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:21:24.646117       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:21:24.650131       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:21:24.650152       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:21:25.163816       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:21:25.200073       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:21:25.249592       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:21:25.255644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1120 21:21:25.256603       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:21:25.260782       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:21:25.729881       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:21:26.507938       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:21:26.523844       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:21:26.534006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:21:31.464455       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:21:31.515028       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:21:31.567581       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:21:31.575587       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1120 21:21:59.246811       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:34056: use of closed network connection
	
	
	==> kube-controller-manager [bb1030a439fa422672c2f148600f0af68898920da6c9ad9eddd6c335fd1f6def] <==
	I1120 21:21:30.711439       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:21:30.711534       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:21:30.711541       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:21:30.711623       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:21:30.711748       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:21:30.711778       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:21:30.711871       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:21:30.712975       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:21:30.713062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:21:30.713087       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:21:30.713156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:21:30.713168       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:21:30.713333       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:21:30.713682       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:21:30.715165       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:21:30.715333       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:21:30.715397       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:21:30.715439       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:21:30.715452       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:21:30.722533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:21:30.724486       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-166874" podCIDRs=["10.244.0.0/24"]
	I1120 21:21:30.736712       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:21:30.743014       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:21:30.749376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:21:50.664601       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0fc32caf7c6d4e65121fcfe9c5a5395d5cfe1303bcf0110f840eb35ae754c29b] <==
	I1120 21:21:32.565413       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:21:32.638541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:21:32.739311       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:21:32.739363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1120 21:21:32.739527       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:21:32.761561       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:21:32.761655       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:21:32.768088       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:21:32.768556       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:21:32.768630       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:21:32.770637       1 config.go:200] "Starting service config controller"
	I1120 21:21:32.770661       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:21:32.770681       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:21:32.770686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:21:32.770705       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:21:32.770710       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:21:32.770905       1 config.go:309] "Starting node config controller"
	I1120 21:21:32.770936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:21:32.770944       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:21:32.871579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:21:32.871597       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:21:32.871600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5b7af8597addb99c661127b4f3a95e45aefcdb2b5d9684fde6f3401176eb09a4] <==
	E1120 21:21:23.709554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:21:23.709551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:21:23.709584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:21:23.709686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:21:23.710687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:21:23.709944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:21:23.711769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:21:23.711897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:21:23.711989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:21:23.713646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:21:23.713813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:21:23.713957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:21:23.713964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:21:23.714016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:21:23.714173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:21:24.529358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:21:24.600937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:21:24.613757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:21:24.749144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:21:24.749260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:21:24.815038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:21:24.884142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:21:24.928907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:21:25.148260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 21:21:27.406112       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.595789    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b9f397e-01ab-4831-819f-0df8db892b7c-xtables-lock\") pod \"kube-proxy-8mtnk\" (UID: \"3b9f397e-01ab-4831-819f-0df8db892b7c\") " pod="kube-system/kube-proxy-8mtnk"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596036    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/89c8ea21-6ae6-4fcc-b291-b1c32f999b92-cni-cfg\") pod \"kindnet-w6hk4\" (UID: \"89c8ea21-6ae6-4fcc-b291-b1c32f999b92\") " pod="kube-system/kindnet-w6hk4"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596092    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c8ea21-6ae6-4fcc-b291-b1c32f999b92-xtables-lock\") pod \"kindnet-w6hk4\" (UID: \"89c8ea21-6ae6-4fcc-b291-b1c32f999b92\") " pod="kube-system/kindnet-w6hk4"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596115    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c8ea21-6ae6-4fcc-b291-b1c32f999b92-lib-modules\") pod \"kindnet-w6hk4\" (UID: \"89c8ea21-6ae6-4fcc-b291-b1c32f999b92\") " pod="kube-system/kindnet-w6hk4"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596372    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b9f397e-01ab-4831-819f-0df8db892b7c-kube-proxy\") pod \"kube-proxy-8mtnk\" (UID: \"3b9f397e-01ab-4831-819f-0df8db892b7c\") " pod="kube-system/kube-proxy-8mtnk"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596424    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b9f397e-01ab-4831-819f-0df8db892b7c-lib-modules\") pod \"kube-proxy-8mtnk\" (UID: \"3b9f397e-01ab-4831-819f-0df8db892b7c\") " pod="kube-system/kube-proxy-8mtnk"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: I1120 21:21:31.596446    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gk6w\" (UniqueName: \"kubernetes.io/projected/3b9f397e-01ab-4831-819f-0df8db892b7c-kube-api-access-8gk6w\") pod \"kube-proxy-8mtnk\" (UID: \"3b9f397e-01ab-4831-819f-0df8db892b7c\") " pod="kube-system/kube-proxy-8mtnk"
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705674    2313 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705734    2313 projected.go:196] Error preparing data for projected volume kube-api-access-8gk6w for pod kube-system/kube-proxy-8mtnk: configmap "kube-root-ca.crt" not found
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705739    2313 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705768    2313 projected.go:196] Error preparing data for projected volume kube-api-access-96spk for pod kube-system/kindnet-w6hk4: configmap "kube-root-ca.crt" not found
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705833    2313 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b9f397e-01ab-4831-819f-0df8db892b7c-kube-api-access-8gk6w podName:3b9f397e-01ab-4831-819f-0df8db892b7c nodeName:}" failed. No retries permitted until 2025-11-20 21:21:32.205802913 +0000 UTC m=+5.921418484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8gk6w" (UniqueName: "kubernetes.io/projected/3b9f397e-01ab-4831-819f-0df8db892b7c-kube-api-access-8gk6w") pod "kube-proxy-8mtnk" (UID: "3b9f397e-01ab-4831-819f-0df8db892b7c") : configmap "kube-root-ca.crt" not found
	Nov 20 21:21:31 no-preload-166874 kubelet[2313]: E1120 21:21:31.705854    2313 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89c8ea21-6ae6-4fcc-b291-b1c32f999b92-kube-api-access-96spk podName:89c8ea21-6ae6-4fcc-b291-b1c32f999b92 nodeName:}" failed. No retries permitted until 2025-11-20 21:21:32.205844831 +0000 UTC m=+5.921460387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-96spk" (UniqueName: "kubernetes.io/projected/89c8ea21-6ae6-4fcc-b291-b1c32f999b92-kube-api-access-96spk") pod "kindnet-w6hk4" (UID: "89c8ea21-6ae6-4fcc-b291-b1c32f999b92") : configmap "kube-root-ca.crt" not found
	Nov 20 21:21:33 no-preload-166874 kubelet[2313]: I1120 21:21:33.475410    2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mtnk" podStartSLOduration=2.475387851 podStartE2EDuration="2.475387851s" podCreationTimestamp="2025-11-20 21:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:33.475365878 +0000 UTC m=+7.190981452" watchObservedRunningTime="2025-11-20 21:21:33.475387851 +0000 UTC m=+7.191003425"
	Nov 20 21:21:39 no-preload-166874 kubelet[2313]: I1120 21:21:39.230926    2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w6hk4" podStartSLOduration=4.896273441 podStartE2EDuration="8.230902124s" podCreationTimestamp="2025-11-20 21:21:31 +0000 UTC" firstStartedPulling="2025-11-20 21:21:32.451541508 +0000 UTC m=+6.167157076" lastFinishedPulling="2025-11-20 21:21:35.786170205 +0000 UTC m=+9.501785759" observedRunningTime="2025-11-20 21:21:36.449307998 +0000 UTC m=+10.164923571" watchObservedRunningTime="2025-11-20 21:21:39.230902124 +0000 UTC m=+12.946517699"
	Nov 20 21:21:46 no-preload-166874 kubelet[2313]: I1120 21:21:46.527123    2313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:21:46 no-preload-166874 kubelet[2313]: I1120 21:21:46.607373    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvjn\" (UniqueName: \"kubernetes.io/projected/d362d663-578d-46ae-9fe2-b28ab1b00f5c-kube-api-access-ndvjn\") pod \"storage-provisioner\" (UID: \"d362d663-578d-46ae-9fe2-b28ab1b00f5c\") " pod="kube-system/storage-provisioner"
	Nov 20 21:21:46 no-preload-166874 kubelet[2313]: I1120 21:21:46.607443    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c4bc14b-7cfc-45bf-9b6f-521c533cfe32-config-volume\") pod \"coredns-66bc5c9577-knwbq\" (UID: \"5c4bc14b-7cfc-45bf-9b6f-521c533cfe32\") " pod="kube-system/coredns-66bc5c9577-knwbq"
	Nov 20 21:21:46 no-preload-166874 kubelet[2313]: I1120 21:21:46.607547    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfth\" (UniqueName: \"kubernetes.io/projected/5c4bc14b-7cfc-45bf-9b6f-521c533cfe32-kube-api-access-nkfth\") pod \"coredns-66bc5c9577-knwbq\" (UID: \"5c4bc14b-7cfc-45bf-9b6f-521c533cfe32\") " pod="kube-system/coredns-66bc5c9577-knwbq"
	Nov 20 21:21:46 no-preload-166874 kubelet[2313]: I1120 21:21:46.607630    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d362d663-578d-46ae-9fe2-b28ab1b00f5c-tmp\") pod \"storage-provisioner\" (UID: \"d362d663-578d-46ae-9fe2-b28ab1b00f5c\") " pod="kube-system/storage-provisioner"
	Nov 20 21:21:47 no-preload-166874 kubelet[2313]: I1120 21:21:47.476359    2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.476336025 podStartE2EDuration="14.476336025s" podCreationTimestamp="2025-11-20 21:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:47.476098689 +0000 UTC m=+21.191714263" watchObservedRunningTime="2025-11-20 21:21:47.476336025 +0000 UTC m=+21.191951599"
	Nov 20 21:21:47 no-preload-166874 kubelet[2313]: I1120 21:21:47.488701    2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-knwbq" podStartSLOduration=16.488678517 podStartE2EDuration="16.488678517s" podCreationTimestamp="2025-11-20 21:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:47.48865199 +0000 UTC m=+21.204267578" watchObservedRunningTime="2025-11-20 21:21:47.488678517 +0000 UTC m=+21.204294091"
	Nov 20 21:21:50 no-preload-166874 kubelet[2313]: I1120 21:21:50.232462    2313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjd8k\" (UniqueName: \"kubernetes.io/projected/2648763c-5822-494b-91d6-789fd9fa6909-kube-api-access-mjd8k\") pod \"busybox\" (UID: \"2648763c-5822-494b-91d6-789fd9fa6909\") " pod="default/busybox"
	Nov 20 21:21:53 no-preload-166874 kubelet[2313]: I1120 21:21:53.495178    2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.978148247 podStartE2EDuration="3.495156872s" podCreationTimestamp="2025-11-20 21:21:50 +0000 UTC" firstStartedPulling="2025-11-20 21:21:50.520337672 +0000 UTC m=+24.235953237" lastFinishedPulling="2025-11-20 21:21:53.037346296 +0000 UTC m=+26.752961862" observedRunningTime="2025-11-20 21:21:53.494856369 +0000 UTC m=+27.210471939" watchObservedRunningTime="2025-11-20 21:21:53.495156872 +0000 UTC m=+27.210772450"
	Nov 20 21:21:59 no-preload-166874 kubelet[2313]: E1120 21:21:59.246766    2313 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41088->127.0.0.1:43591: write tcp 127.0.0.1:41088->127.0.0.1:43591: write: broken pipe
	
	
	==> storage-provisioner [658544e4b6d9ae2a483746af616de70fb04089d66aff91a3576d172ad8c844c2] <==
	I1120 21:21:46.934075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:21:46.943987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:21:46.944039       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:21:46.946413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:46.951820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:21:46.952045       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:21:46.952262       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-166874_d2142072-61e1-4d96-b75c-eac5aa490360!
	I1120 21:21:46.952264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c660846-32c5-44e8-899b-a6d3ef0e6368", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-166874_d2142072-61e1-4d96-b75c-eac5aa490360 became leader
	W1120 21:21:46.955174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:46.960472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:21:47.053073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-166874_d2142072-61e1-4d96-b75c-eac5aa490360!
	W1120 21:21:48.963355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:49.056433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:51.059775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:51.064713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:53.068381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:53.075130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:55.079179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:55.086853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:57.090371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:57.094124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:59.097605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:21:59.106569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:01.110465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:01.114920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-166874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)
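A side note on the storage-provisioner log above: every leader-election heartbeat triggers a "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning because the provisioner still takes its lock on an Endpoints object. The sketch below is a minimal illustration, not the provisioner's actual code, of acquiring the same "k8s.io-minikube-hostpath" lock through a coordination.k8s.io/v1 Lease, which client-go supports and which avoids the deprecated API entirely; the kubeconfig path and identity string are placeholders.

```go
// Hedged sketch: Lease-based leader election for the provisioner lock.
// Not the storage-provisioner's actual implementation.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name/namespace as the log above, but stored in a Lease
	// object instead of a v1 Endpoints object.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"}, // placeholder identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller here */ },
			OnStoppedLeading: func() { /* stop work when leadership is lost */ },
		},
	})
}
```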
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (314.728181ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
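For context on the exit above: the error trail "check paused: list paused: runc: sudo runc list -f json" says the addon-enable path first probes the node for paused containers by shelling out to runc, and on this node that probe itself fails because /run/runc is absent, so the command aborts with MK_ADDON_ENABLE_PAUSED before any addon work happens. A minimal sketch of that kind of probe follows; it is not minikube's implementation, and the JSON field names are assumed from runc's `list --format json` output.

```go
// Minimal sketch of the paused-container probe described by the error
// trail above; not minikube's actual code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the fields we care about from `runc list -f json`
// (field names assumed from runc's JSON output).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the command itself fails with
		// "open /run/runc: no such file or directory", so the caller
		// sees exit status 1 before any JSON is produced.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if strings.EqualFold(c.Status, "paused") {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused:", err) // mirrors the MK_ADDON_ENABLE_PAUSED path
		return
	}
	fmt.Println("paused containers:", ids)
}
```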
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-714571 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-714571 describe deploy/metrics-server -n kube-system: exit status 1 (79.869034ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-714571 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
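For reference, the check at start_stop_delete_test.go:219 expects the registry override (--registries=MetricsServer=fake.domain) to show up in the deployed image name; since the enable aborted, there is no deployment and the captured info is empty. Below is a hedged sketch of that style of assertion, using a jsonpath query rather than the test's `describe` call; the helper name is hypothetical and this is not the test's actual code.

```go
// Hypothetical helper: does the deployment's pod template reference the
// expected (registry-overridden) image?
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func deploymentUsesImage(kubeContext, ns, deploy, want string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", ns, "get", "deploy", deploy,
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		return false, err // a NotFound error here means the addon never deployed
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := deploymentUsesImage("embed-certs-714571", "kube-system", "metrics-server",
		"fake.domain/registry.k8s.io/echoserver:1.4")
	fmt.Println(ok, err)
}
```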
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-714571
helpers_test.go:243: (dbg) docker inspect embed-certs-714571:

-- stdout --
	[
	    {
	        "Id": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	        "Created": "2025-11-20T21:21:34.898715026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 547569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:21:35.077317508Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240-json.log",
	        "Name": "/embed-certs-714571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-714571:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-714571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	                "LowerDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-714571",
	                "Source": "/var/lib/docker/volumes/embed-certs-714571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-714571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-714571",
	                "name.minikube.sigs.k8s.io": "embed-certs-714571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b98f9a3a974d278a0658c89bd0eda85b0d799d38bb9a4701cbbe5c20f8d61c90",
	            "SandboxKey": "/var/run/docker/netns/b98f9a3a974d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-714571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ab433249a4ff0be5f1bb45e1da7b7dc47bc44c49beb110d4c515f5ebe9f33a4",
	                    "EndpointID": "c2d64ad3156bf0a48ab1338bf895e14144682c7110f36474501c49db8213c24a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ae:d5:7c:9a:34:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-714571",
	                        "ccf93eabab84"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
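The port map in NetworkSettings above is how the harness reaches the node: each guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 port. A minimal sketch of pulling the SSH endpoint out of that JSON, using the same Go template the harness itself runs further down in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-714571

For the container state captured above this prints 33098, which is the port the ssh client then dials on 127.0.0.1.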
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25: (1.434066698s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/docker/daemon.json                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo docker system info                                                                                                                                                                                              │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p old-k8s-version-936214 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:21:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:21:54.553588  555241 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:21:54.553746  555241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:54.553758  555241 out.go:374] Setting ErrFile to fd 2...
	I1120 21:21:54.553764  555241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:21:54.554000  555241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:21:54.554511  555241 out.go:368] Setting JSON to false
	I1120 21:21:54.555812  555241 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14657,"bootTime":1763659058,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:21:54.555922  555241 start.go:143] virtualization: kvm guest
	I1120 21:21:54.558245  555241 out.go:179] * [old-k8s-version-936214] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:21:54.559604  555241 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:21:54.559618  555241 notify.go:221] Checking for updates...
	I1120 21:21:54.562377  555241 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:21:54.563610  555241 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:54.564815  555241 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:21:54.566134  555241 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:21:54.567428  555241 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:21:54.568985  555241 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:21:54.570642  555241 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1120 21:21:54.571723  555241 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:21:54.601002  555241 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:21:54.601106  555241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:54.672614  555241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:21:54.658315121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:54.672742  555241 docker.go:319] overlay module found
	I1120 21:21:54.675091  555241 out.go:179] * Using the docker driver based on existing profile
	I1120 21:21:54.676461  555241 start.go:309] selected driver: docker
	I1120 21:21:54.676481  555241 start.go:930] validating driver "docker" against &{Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:54.676593  555241 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:21:54.677294  555241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:21:54.758174  555241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:21:54.742972312 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:21:54.758619  555241 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:21:54.758672  555241 cni.go:84] Creating CNI manager for ""
	I1120 21:21:54.758743  555241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:21:54.758800  555241 start.go:353] cluster config:
	{Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:54.762052  555241 out.go:179] * Starting "old-k8s-version-936214" primary control-plane node in "old-k8s-version-936214" cluster
	I1120 21:21:54.763894  555241 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:21:54.766252  555241 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:21:54.767756  555241 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 21:21:54.767993  555241 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1120 21:21:54.768021  555241 cache.go:65] Caching tarball of preloaded images
	I1120 21:21:54.768147  555241 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:21:54.768164  555241 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1120 21:21:54.768192  555241 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:21:54.768383  555241 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/config.json ...
	I1120 21:21:54.796979  555241 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:21:54.797035  555241 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:21:54.797052  555241 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:21:54.797090  555241 start.go:360] acquireMachinesLock for old-k8s-version-936214: {Name:mkfa161e7f3757034713c6013c89f32db01373db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:21:54.797258  555241 start.go:364] duration metric: took 90.697µs to acquireMachinesLock for "old-k8s-version-936214"
	I1120 21:21:54.797285  555241 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:21:54.797292  555241 fix.go:54] fixHost starting: 
	I1120 21:21:54.797661  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:21:54.817547  555241 fix.go:112] recreateIfNeeded on old-k8s-version-936214: state=Stopped err=<nil>
	W1120 21:21:54.817613  555241 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:21:54.248885  552911 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-454524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:21:54.268479  552911 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1120 21:21:54.273005  552911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:21:54.285259  552911 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:21:54.285438  552911 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:21:54.285502  552911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:21:54.324554  552911 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:21:54.324592  552911 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:21:54.324655  552911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:21:54.357033  552911 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:21:54.357062  552911 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:21:54.357072  552911 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1120 21:21:54.357202  552911 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-454524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:21:54.357310  552911 ssh_runner.go:195] Run: crio config
	I1120 21:21:54.410548  552911 cni.go:84] Creating CNI manager for ""
	I1120 21:21:54.410568  552911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:21:54.410586  552911 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:21:54.410613  552911 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-454524 NodeName:default-k8s-diff-port-454524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:21:54.410766  552911 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-454524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:21:54.410836  552911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:21:54.420288  552911 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:21:54.420358  552911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:21:54.429446  552911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1120 21:21:54.444770  552911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:21:54.463288  552911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1120 21:21:54.478129  552911 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:21:54.482785  552911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:21:54.495377  552911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:21:54.598420  552911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:21:54.630060  552911 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524 for IP: 192.168.85.2
	I1120 21:21:54.630089  552911 certs.go:195] generating shared ca certs ...
	I1120 21:21:54.630112  552911 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.630302  552911 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:21:54.630371  552911 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:21:54.630387  552911 certs.go:257] generating profile certs ...
	I1120 21:21:54.630468  552911 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key
	I1120 21:21:54.630600  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt with IP's: []
	I1120 21:21:54.712179  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt ...
	I1120 21:21:54.712211  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt: {Name:mk3d98c70bd25799f85ae7ed1a857e2173181eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.712459  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key ...
	I1120 21:21:54.712484  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key: {Name:mk402653303a22fa60081c20f8184971901365ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:54.712641  552911 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9
	I1120 21:21:54.712662  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1120 21:21:54.433402  545013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:54.933453  545013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:21:55.030343  545013 kubeadm.go:1114] duration metric: took 4.187525484s to wait for elevateKubeSystemPrivileges
	I1120 21:21:55.030380  545013 kubeadm.go:403] duration metric: took 14.827786955s to StartCluster
	I1120 21:21:55.030402  545013 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.030466  545013 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:21:55.032110  545013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.032396  545013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:21:55.032396  545013 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:21:55.032498  545013 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:21:55.032592  545013 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:21:55.032604  545013 addons.go:70] Setting default-storageclass=true in profile "embed-certs-714571"
	I1120 21:21:55.032622  545013 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-714571"
	I1120 21:21:55.032596  545013 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-714571"
	I1120 21:21:55.032682  545013 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-714571"
	I1120 21:21:55.032714  545013 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:21:55.033026  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.033181  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.035724  545013 out.go:179] * Verifying Kubernetes components...
	I1120 21:21:55.037042  545013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:21:55.067501  545013 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:21:55.067855  545013 addons.go:239] Setting addon default-storageclass=true in "embed-certs-714571"
	I1120 21:21:55.067971  545013 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:21:55.068589  545013 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:21:55.069249  545013 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:55.069271  545013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:21:55.069324  545013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-714571
	I1120 21:21:55.109451  545013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/embed-certs-714571/id_rsa Username:docker}
	I1120 21:21:55.117703  545013 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:55.117752  545013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:21:55.117841  545013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-714571
	I1120 21:21:55.149079  545013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/embed-certs-714571/id_rsa Username:docker}
	I1120 21:21:55.179500  545013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:21:55.252245  545013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:21:55.271778  545013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:21:55.315547  545013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:21:55.494812  545013 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1120 21:21:55.496904  545013 node_ready.go:35] waiting up to 6m0s for node "embed-certs-714571" to be "Ready" ...
	I1120 21:21:55.732812  545013 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:21:55.734627  545013 addons.go:515] duration metric: took 702.116993ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:21:55.999864  545013 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-714571" context rescaled to 1 replicas
	W1120 21:21:57.500346  545013 node_ready.go:57] node "embed-certs-714571" has "Ready":"False" status (will retry)
	I1120 21:21:54.819263  555241 out.go:252] * Restarting existing docker container for "old-k8s-version-936214" ...
	I1120 21:21:54.819356  555241 cli_runner.go:164] Run: docker start old-k8s-version-936214
	I1120 21:21:55.244167  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:21:55.281159  555241 kic.go:430] container "old-k8s-version-936214" state is running.
	I1120 21:21:55.282550  555241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-936214
	I1120 21:21:55.320411  555241 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/config.json ...
	I1120 21:21:55.320717  555241 machine.go:94] provisionDockerMachine start ...
	I1120 21:21:55.320808  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:55.358589  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:55.358988  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:55.359031  555241 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:21:55.359930  555241 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54594->127.0.0.1:33108: read: connection reset by peer
	I1120 21:21:58.499142  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-936214
	
	I1120 21:21:58.499199  555241 ubuntu.go:182] provisioning hostname "old-k8s-version-936214"
	I1120 21:21:58.499316  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:58.519571  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:58.519868  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:58.519890  555241 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-936214 && echo "old-k8s-version-936214" | sudo tee /etc/hostname
	I1120 21:21:58.666499  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-936214
	
	I1120 21:21:58.666605  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:58.686915  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:58.687147  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:58.687165  555241 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-936214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-936214/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-936214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:21:58.822444  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: 
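The empty output above means the hostname script succeeded: it rewrites or appends the 127.0.1.1 alias, leaving /etc/hosts with a line like (illustrative):
	127.0.1.1 old-k8s-version-936214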
	I1120 21:21:58.822479  555241 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:21:58.822533  555241 ubuntu.go:190] setting up certificates
	I1120 21:21:58.822547  555241 provision.go:84] configureAuth start
	I1120 21:21:58.822622  555241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-936214
	I1120 21:21:58.841961  555241 provision.go:143] copyHostCerts
	I1120 21:21:58.842050  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:21:58.842073  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:21:58.842168  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:21:58.842339  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:21:58.842355  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:21:58.842392  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:21:58.842471  555241 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:21:58.842482  555241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:21:58.842516  555241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:21:58.842589  555241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-936214 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-936214]
	I1120 21:21:58.996531  555241 provision.go:177] copyRemoteCerts
	I1120 21:21:58.996607  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:21:58.996659  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.016004  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.114756  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:21:59.136505  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:21:59.157513  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1120 21:21:59.178867  555241 provision.go:87] duration metric: took 356.29936ms to configureAuth
	I1120 21:21:59.178903  555241 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:21:59.179125  555241 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:21:59.179293  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.199773  555241 main.go:143] libmachine: Using SSH client type: native
	I1120 21:21:59.200111  555241 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1120 21:21:59.200136  555241 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:21:59.548810  555241 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:21:59.548843  555241 machine.go:97] duration metric: took 4.228100029s to provisionDockerMachine
	I1120 21:21:59.548858  555241 start.go:293] postStartSetup for "old-k8s-version-936214" (driver="docker")
	I1120 21:21:59.548872  555241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:21:59.549018  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:21:59.549074  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:55.109828  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 ...
	I1120 21:21:55.109870  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9: {Name:mkfdb2b4d71c68fef0215c1daf879c3722fe5565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.111051  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9 ...
	I1120 21:21:55.111130  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9: {Name:mk45c2f0985af3a723bec9c6a67c4720324ba6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.111424  552911 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt.0d8644c9 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt
	I1120 21:21:55.111594  552911 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key.0d8644c9 -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key
	I1120 21:21:55.111722  552911 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key
	I1120 21:21:55.111768  552911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt with IP's: []
	I1120 21:21:55.437537  552911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt ...
	I1120 21:21:55.437579  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt: {Name:mke4c3c3a0e2db2a3c44c52d3fcba5e8c036ede8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.437839  552911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key ...
	I1120 21:21:55.437869  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key: {Name:mkc9e0c5b7fcd5882214e2ef2d6beb9b3938ee0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:21:55.438187  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:21:55.438264  552911 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:21:55.438281  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:21:55.438310  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:21:55.438354  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:21:55.438388  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:21:55.438451  552911 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:21:55.439378  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:21:55.475526  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:21:55.503723  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:21:55.530739  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:21:55.558714  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1120 21:21:55.588195  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:21:55.612911  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:21:55.636556  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:21:55.673448  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:21:55.699724  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:21:55.724767  552911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:21:55.745645  552911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:21:55.760505  552911 ssh_runner.go:195] Run: openssl version
	I1120 21:21:55.767381  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.775968  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:21:55.784376  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.788757  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.788818  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:21:55.826571  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:21:55.835689  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
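The symlink name used here is the certificate's OpenSSL subject hash plus a ".0" suffix, which is how b5213941.0 is derived from minikubeCA.pem; a manual sketch of the same two steps:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0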
	I1120 21:21:55.844752  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.853116  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:21:55.861830  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.865827  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.865976  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:21:55.906919  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:21:55.915552  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:21:55.924027  552911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.931887  552911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:21:55.939687  552911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.943947  552911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.944000  552911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:21:55.980287  552911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:21:55.989513  552911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
	I1120 21:21:55.998075  552911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:21:56.002695  552911 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:21:56.002748  552911 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:21:56.002822  552911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:21:56.002876  552911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:21:56.032426  552911 cri.go:89] found id: ""
	I1120 21:21:56.032516  552911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:21:56.041726  552911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:21:56.050607  552911 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:21:56.050674  552911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:21:56.059286  552911 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:21:56.059318  552911 kubeadm.go:158] found existing configuration files:
	
	I1120 21:21:56.059367  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1120 21:21:56.067851  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:21:56.067906  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:21:56.076272  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1120 21:21:56.084929  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:21:56.084992  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:21:56.094198  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1120 21:21:56.102708  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:21:56.102762  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:21:56.111192  552911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1120 21:21:56.120473  552911 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:21:56.120534  552911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:21:56.129849  552911 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:21:56.192943  552911 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:21:56.256339  552911 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
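Both preflight warnings are benign in this environment: minikube starts kubelet itself (see the systemctl start kubelet calls elsewhere in this log). On a plain host the second warning would be cleared with:
	sudo systemctl enable kubelet.service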
	I1120 21:21:59.575099  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.680661  555241 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:21:59.685448  555241 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:21:59.685485  555241 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:21:59.685498  555241 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:21:59.685557  555241 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:21:59.685647  555241 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:21:59.685781  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:21:59.695276  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:21:59.716023  555241 start.go:296] duration metric: took 167.146558ms for postStartSetup
	I1120 21:21:59.716115  555241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:21:59.716161  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.737404  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.834161  555241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:21:59.840113  555241 fix.go:56] duration metric: took 5.042813438s for fixHost
	I1120 21:21:59.840148  555241 start.go:83] releasing machines lock for "old-k8s-version-936214", held for 5.04287359s
	I1120 21:21:59.840278  555241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-936214
	I1120 21:21:59.862433  555241 ssh_runner.go:195] Run: cat /version.json
	I1120 21:21:59.862499  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.862515  555241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:21:59.862579  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:21:59.884442  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.890595  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:21:59.986596  555241 ssh_runner.go:195] Run: systemctl --version
	I1120 21:22:00.058234  555241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:22:00.098973  555241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:22:00.105277  555241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:22:00.105354  555241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:22:00.114732  555241 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:22:00.114759  555241 start.go:496] detecting cgroup driver to use...
	I1120 21:22:00.114792  555241 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:22:00.114846  555241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:22:00.131891  555241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:22:00.149796  555241 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:22:00.149873  555241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:22:00.168449  555241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:22:00.183355  555241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:22:00.270983  555241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:22:00.365998  555241 docker.go:234] disabling docker service ...
	I1120 21:22:00.366067  555241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:22:00.383212  555241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:22:00.396533  555241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:22:00.494261  555241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:22:00.591685  555241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:22:00.606894  555241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:22:00.624153  555241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1120 21:22:00.624212  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.635152  555241 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:22:00.635246  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.645497  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.655157  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.665244  555241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:22:00.675733  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.686193  555241 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:00.696350  555241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
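After these in-place sed edits, /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following keys (a sketch assembled from the sed expressions above; the surrounding TOML section headers are omitted since they come from the shipped drop-in):
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]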
	I1120 21:22:00.708739  555241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:22:00.718614  555241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:22:00.727803  555241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:00.831099  555241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:22:00.994187  555241 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:22:00.994279  555241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:22:00.999625  555241 start.go:564] Will wait 60s for crictl version
	I1120 21:22:00.999701  555241 ssh_runner.go:195] Run: which crictl
	I1120 21:22:01.004654  555241 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:22:01.035010  555241 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:22:01.035105  555241 ssh_runner.go:195] Run: crio --version
	I1120 21:22:01.072024  555241 ssh_runner.go:195] Run: crio --version
	I1120 21:22:01.108689  555241 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1120 21:22:01.109916  555241 cli_runner.go:164] Run: docker network inspect old-k8s-version-936214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:22:01.130336  555241 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:22:01.135619  555241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
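The one-liner rewrites /etc/hosts without leaving a partial file: grep -v drops any stale host.minikube.internal entry, echo appends the fresh mapping to a temp file, and sudo cp copies it back, leaving:
	192.168.103.1	host.minikube.internal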
	I1120 21:22:01.147099  555241 kubeadm.go:884] updating cluster {Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:22:01.147245  555241 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 21:22:01.147315  555241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:22:01.186546  555241 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:22:01.186569  555241 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:22:01.186617  555241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:22:01.214131  555241 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:22:01.214155  555241 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:22:01.214165  555241 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1120 21:22:01.214298  555241 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-936214 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:22:01.214377  555241 ssh_runner.go:195] Run: crio config
	I1120 21:22:01.276013  555241 cni.go:84] Creating CNI manager for ""
	I1120 21:22:01.276042  555241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:01.276065  555241 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:22:01.276095  555241 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-936214 NodeName:old-k8s-version-936214 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:22:01.276298  555241 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-936214"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
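A generated config like the one above can be sanity-checked without mutating node state via kubeadm's dry-run mode (illustrative; the test itself runs the real init/restart path):
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run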
	I1120 21:22:01.276374  555241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1120 21:22:01.287086  555241 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:22:01.287155  555241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:22:01.296046  555241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1120 21:22:01.311840  555241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:22:01.325347  555241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1120 21:22:01.339715  555241 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:22:01.343737  555241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:22:01.354911  555241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:01.453408  555241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:22:01.478657  555241 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214 for IP: 192.168.103.2
	I1120 21:22:01.478681  555241 certs.go:195] generating shared ca certs ...
	I1120 21:22:01.478774  555241 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:01.479030  555241 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:22:01.479093  555241 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:22:01.479103  555241 certs.go:257] generating profile certs ...
	I1120 21:22:01.479212  555241 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/client.key
	I1120 21:22:01.479372  555241 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/apiserver.key.61d99856
	I1120 21:22:01.479503  555241 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/proxy-client.key
	I1120 21:22:01.479754  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:22:01.479848  555241 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:22:01.479867  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:22:01.479964  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:22:01.480034  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:22:01.480107  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:22:01.480268  555241 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:22:01.481344  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:22:01.507257  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:22:01.530441  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:22:01.553860  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:22:01.579178  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1120 21:22:01.608158  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1120 21:22:01.630012  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:22:01.651078  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/old-k8s-version-936214/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:22:01.675828  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:22:01.701280  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:22:01.721793  555241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:22:01.741918  555241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:22:01.758102  555241 ssh_runner.go:195] Run: openssl version
	I1120 21:22:01.766179  555241 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:22:01.774433  555241 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:22:01.783784  555241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:22:01.788004  555241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:22:01.788069  555241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:22:01.827433  555241 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:22:01.835968  555241 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:22:01.844478  555241 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:22:01.852681  555241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:22:01.856944  555241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:22:01.857007  555241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:22:01.902738  555241 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:22:01.912755  555241 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:01.926366  555241 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:22:01.939858  555241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:01.946631  555241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:01.946783  555241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:01.993523  555241 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:22:02.003203  555241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:22:02.007857  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:22:02.055812  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:22:02.099355  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:22:02.154304  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:22:02.214066  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:22:02.268274  555241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
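The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so each check above doubles as a renewal test:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for 24h+" || echo "renewal needed"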
	I1120 21:22:02.333850  555241 kubeadm.go:401] StartCluster: {Name:old-k8s-version-936214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-936214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:02.333988  555241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:22:02.334067  555241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:22:02.372284  555241 cri.go:89] found id: "0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5"
	I1120 21:22:02.372316  555241 cri.go:89] found id: "4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997"
	I1120 21:22:02.372322  555241 cri.go:89] found id: "1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d"
	I1120 21:22:02.372326  555241 cri.go:89] found id: "7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392"
	I1120 21:22:02.372335  555241 cri.go:89] found id: ""
	I1120 21:22:02.372386  555241 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:22:02.388471  555241 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:02Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:22:02.388548  555241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:22:02.402069  555241 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:22:02.402095  555241 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:22:02.402152  555241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:22:02.413401  555241 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:22:02.414672  555241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-936214" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:02.415523  555241 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-936214" cluster setting kubeconfig missing "old-k8s-version-936214" context setting]
	I1120 21:22:02.416660  555241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:02.418979  555241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:22:02.429351  555241 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1120 21:22:02.429496  555241 kubeadm.go:602] duration metric: took 27.393938ms to restartPrimaryControlPlane
	I1120 21:22:02.429512  555241 kubeadm.go:403] duration metric: took 95.672159ms to StartCluster
	I1120 21:22:02.429530  555241 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:02.429593  555241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:02.431645  555241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:02.431918  555241 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:22:02.432130  555241 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:22:02.432258  555241 addons.go:70] Setting dashboard=true in profile "old-k8s-version-936214"
	I1120 21:22:02.432243  555241 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-936214"
	I1120 21:22:02.432280  555241 addons.go:239] Setting addon dashboard=true in "old-k8s-version-936214"
	I1120 21:22:02.432285  555241 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-936214"
	W1120 21:22:02.432290  555241 addons.go:248] addon dashboard should already be in state true
	I1120 21:22:02.432296  555241 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-936214"
	I1120 21:22:02.432330  555241 host.go:66] Checking if "old-k8s-version-936214" exists ...
	I1120 21:22:02.432390  555241 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:22:02.432668  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:22:02.432884  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:22:02.432280  555241 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-936214"
	W1120 21:22:02.432990  555241 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:22:02.433031  555241 host.go:66] Checking if "old-k8s-version-936214" exists ...
	I1120 21:22:02.433545  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:22:02.434829  555241 out.go:179] * Verifying Kubernetes components...
	I1120 21:22:02.439332  555241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:02.468236  555241 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:22:02.470339  555241 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:22:02.470485  555241 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:02.470508  555241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:22:02.471037  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:22:02.472955  555241 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1120 21:21:59.500798  545013 node_ready.go:57] node "embed-certs-714571" has "Ready":"False" status (will retry)
	W1120 21:22:01.501021  545013 node_ready.go:57] node "embed-certs-714571" has "Ready":"False" status (will retry)
	I1120 21:22:02.474001  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:22:02.474019  555241 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:22:02.474078  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:22:02.478763  555241 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-936214"
	W1120 21:22:02.478791  555241 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:22:02.478819  555241 host.go:66] Checking if "old-k8s-version-936214" exists ...
	I1120 21:22:02.479380  555241 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:22:02.522865  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:22:02.523895  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:22:02.529546  555241 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:02.529619  555241 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:22:02.529730  555241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:22:02.567393  555241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
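
The SSH clients above all target port 33108, which is resolved by the docker container inspect template a few lines earlier; run standalone, it prints the host port Docker mapped to the node container's port 22:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-936214
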
	I1120 21:22:02.651064  555241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:22:02.670348  555241 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-936214" to be "Ready" ...
	I1120 21:22:02.691091  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:22:02.691121  555241 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:22:02.692587  555241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:02.722719  555241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:02.724616  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:22:02.724648  555241 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:22:02.759364  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:22:02.759398  555241 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:22:02.779350  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:22:02.779379  555241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:22:02.822508  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:22:02.822538  555241 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:22:02.848160  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:22:02.848187  555241 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:22:02.866522  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:22:02.866545  555241 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:22:02.885210  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:22:02.885317  555241 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:22:02.901929  555241 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:22:02.902002  555241 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:22:02.918157  555241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:22:04.485832  555241 node_ready.go:49] node "old-k8s-version-936214" is "Ready"
	I1120 21:22:04.485870  555241 node_ready.go:38] duration metric: took 1.815488414s for node "old-k8s-version-936214" to be "Ready" ...
	I1120 21:22:04.485888  555241 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:22:04.485957  555241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:22:05.273181  555241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.580544541s)
	I1120 21:22:05.273290  555241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.550532372s)
	I1120 21:22:05.690399  555241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.772190491s)
	I1120 21:22:05.690545  555241 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.204558382s)
	I1120 21:22:05.690651  555241 api_server.go:72] duration metric: took 3.25864189s to wait for apiserver process to appear ...
	I1120 21:22:05.690666  555241 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:22:05.690690  555241 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:22:05.692744  555241 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-936214 addons enable metrics-server
	
	I1120 21:22:05.694386  555241 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
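
The hint printed above can be followed verbatim (profile name taken from this log):

	minikube -p old-k8s-version-936214 addons enable metrics-server
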
	I1120 21:22:07.440492  552911 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:22:07.440556  552911 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:22:07.440667  552911 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:22:07.440746  552911 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 21:22:07.440798  552911 kubeadm.go:319] OS: Linux
	I1120 21:22:07.440844  552911 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:22:07.440888  552911 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:22:07.440960  552911 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:22:07.441032  552911 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:22:07.441102  552911 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:22:07.441176  552911 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:22:07.441263  552911 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:22:07.441337  552911 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 21:22:07.441446  552911 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:22:07.441561  552911 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:22:07.441668  552911 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:22:07.441762  552911 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:22:07.443315  552911 out.go:252]   - Generating certificates and keys ...
	I1120 21:22:07.443408  552911 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:22:07.443505  552911 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:22:07.443635  552911 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:22:07.443714  552911 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:22:07.443806  552911 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:22:07.443865  552911 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:22:07.443909  552911 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:22:07.444038  552911 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-454524 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:22:07.444124  552911 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:22:07.444340  552911 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-454524 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1120 21:22:07.444426  552911 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:22:07.444481  552911 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:22:07.444537  552911 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:22:07.444590  552911 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:22:07.444654  552911 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:22:07.444699  552911 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:22:07.444752  552911 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:22:07.444805  552911 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:22:07.444849  552911 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:22:07.444968  552911 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:22:07.445082  552911 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:22:07.446596  552911 out.go:252]   - Booting up control plane ...
	I1120 21:22:07.446684  552911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:22:07.446755  552911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:22:07.446813  552911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:22:07.446903  552911 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:22:07.447001  552911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:22:07.447110  552911 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:22:07.447182  552911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:22:07.447226  552911 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:22:07.447350  552911 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:22:07.447500  552911 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:22:07.447586  552911 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501770259s
	I1120 21:22:07.447703  552911 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:22:07.447806  552911 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1120 21:22:07.447934  552911 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:22:07.448047  552911 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:22:07.448154  552911 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.942312598s
	I1120 21:22:07.448309  552911 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.324125475s
	I1120 21:22:07.448366  552911 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002402413s
	I1120 21:22:07.448458  552911 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:22:07.448558  552911 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:22:07.448666  552911 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:22:07.448844  552911 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-454524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:22:07.448892  552911 kubeadm.go:319] [bootstrap-token] Using token: 0tz54x.uc15bra2o76ar1aj
	I1120 21:22:07.450160  552911 out.go:252]   - Configuring RBAC rules ...
	I1120 21:22:07.450269  552911 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:22:07.450338  552911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:22:07.450468  552911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:22:07.450606  552911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:22:07.450702  552911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:22:07.450769  552911 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:22:07.450862  552911 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:22:07.450898  552911 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:22:07.450977  552911 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:22:07.450987  552911 kubeadm.go:319] 
	I1120 21:22:07.451070  552911 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:22:07.451078  552911 kubeadm.go:319] 
	I1120 21:22:07.451175  552911 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:22:07.451183  552911 kubeadm.go:319] 
	I1120 21:22:07.451252  552911 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:22:07.451335  552911 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:22:07.451420  552911 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:22:07.451428  552911 kubeadm.go:319] 
	I1120 21:22:07.451503  552911 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:22:07.451512  552911 kubeadm.go:319] 
	I1120 21:22:07.451592  552911 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:22:07.451609  552911 kubeadm.go:319] 
	I1120 21:22:07.451688  552911 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:22:07.451787  552911 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:22:07.451881  552911 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:22:07.451891  552911 kubeadm.go:319] 
	I1120 21:22:07.451998  552911 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:22:07.452104  552911 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:22:07.452112  552911 kubeadm.go:319] 
	I1120 21:22:07.452230  552911 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 0tz54x.uc15bra2o76ar1aj \
	I1120 21:22:07.452376  552911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 21:22:07.452427  552911 kubeadm.go:319] 	--control-plane 
	I1120 21:22:07.452440  552911 kubeadm.go:319] 
	I1120 21:22:07.452509  552911 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:22:07.452516  552911 kubeadm.go:319] 
	I1120 21:22:07.452604  552911 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 0tz54x.uc15bra2o76ar1aj \
	I1120 21:22:07.452702  552911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
	I1120 21:22:07.452714  552911 cni.go:84] Creating CNI manager for ""
	I1120 21:22:07.452721  552911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:07.454861  552911 out.go:179] * Configuring CNI (Container Networking Interface) ...
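
The init output above polls four health endpoints (kubelet on 10248, controller-manager on 10257, scheduler on 10259, apiserver on 8444). A sketch of probing them by hand from inside the node container (e.g. after minikube ssh); -k skips verification of the cluster-internal serving certs:

	curl -sf  http://127.0.0.1:10248/healthz  && echo "kubelet ok"
	curl -skf https://127.0.0.1:10257/healthz && echo "controller-manager ok"
	curl -skf https://127.0.0.1:10259/livez   && echo "scheduler ok"
	curl -skf https://192.168.85.2:8444/livez && echo "apiserver ok"
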
	W1120 21:22:04.001099  545013 node_ready.go:57] node "embed-certs-714571" has "Ready":"False" status (will retry)
	I1120 21:22:06.501008  545013 node_ready.go:49] node "embed-certs-714571" is "Ready"
	I1120 21:22:06.501052  545013 node_ready.go:38] duration metric: took 11.004102176s for node "embed-certs-714571" to be "Ready" ...
	I1120 21:22:06.501109  545013 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:22:06.501204  545013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:22:06.515794  545013 api_server.go:72] duration metric: took 11.483357247s to wait for apiserver process to appear ...
	I1120 21:22:06.515824  545013 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:22:06.515846  545013 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1120 21:22:06.520172  545013 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1120 21:22:06.521432  545013 api_server.go:141] control plane version: v1.34.1
	I1120 21:22:06.521460  545013 api_server.go:131] duration metric: took 5.629869ms to wait for apiserver health ...
	I1120 21:22:06.521472  545013 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:22:06.525059  545013 system_pods.go:59] 8 kube-system pods found
	I1120 21:22:06.525104  545013 system_pods.go:61] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:06.525112  545013 system_pods.go:61] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:06.525118  545013 system_pods.go:61] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:06.525123  545013 system_pods.go:61] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:06.525128  545013 system_pods.go:61] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:06.525132  545013 system_pods.go:61] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:06.525135  545013 system_pods.go:61] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:06.525139  545013 system_pods.go:61] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:06.525145  545013 system_pods.go:74] duration metric: took 3.667862ms to wait for pod list to return data ...
	I1120 21:22:06.525154  545013 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:22:06.527900  545013 default_sa.go:45] found service account: "default"
	I1120 21:22:06.527919  545013 default_sa.go:55] duration metric: took 2.759965ms for default service account to be created ...
	I1120 21:22:06.527934  545013 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:22:06.530543  545013 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:06.530568  545013 system_pods.go:89] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:06.530574  545013 system_pods.go:89] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:06.530580  545013 system_pods.go:89] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:06.530583  545013 system_pods.go:89] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:06.530587  545013 system_pods.go:89] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:06.530590  545013 system_pods.go:89] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:06.530593  545013 system_pods.go:89] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:06.530598  545013 system_pods.go:89] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:06.530619  545013 retry.go:31] will retry after 206.053435ms: missing components: kube-dns
	I1120 21:22:06.740521  545013 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:06.740553  545013 system_pods.go:89] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:06.740559  545013 system_pods.go:89] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:06.740565  545013 system_pods.go:89] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:06.740568  545013 system_pods.go:89] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:06.740572  545013 system_pods.go:89] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:06.740577  545013 system_pods.go:89] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:06.740581  545013 system_pods.go:89] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:06.740585  545013 system_pods.go:89] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:06.740601  545013 retry.go:31] will retry after 287.938548ms: missing components: kube-dns
	I1120 21:22:07.032762  545013 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:07.032794  545013 system_pods.go:89] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:07.032800  545013 system_pods.go:89] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:07.032806  545013 system_pods.go:89] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:07.032809  545013 system_pods.go:89] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:07.032813  545013 system_pods.go:89] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:07.032844  545013 system_pods.go:89] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:07.032855  545013 system_pods.go:89] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:07.032860  545013 system_pods.go:89] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:07.032875  545013 retry.go:31] will retry after 365.613162ms: missing components: kube-dns
	I1120 21:22:07.402969  545013 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:07.403011  545013 system_pods.go:89] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:07.403019  545013 system_pods.go:89] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:07.403025  545013 system_pods.go:89] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:07.403051  545013 system_pods.go:89] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:07.403066  545013 system_pods.go:89] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:07.403072  545013 system_pods.go:89] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:07.403077  545013 system_pods.go:89] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:07.403089  545013 system_pods.go:89] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:07.403110  545013 retry.go:31] will retry after 369.366966ms: missing components: kube-dns
	I1120 21:22:07.779641  545013 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:07.779685  545013 system_pods.go:89] "coredns-66bc5c9577-g47lf" [16cf09bd-2e55-45c9-bf4a-2fe540e25d19] Running
	I1120 21:22:07.779694  545013 system_pods.go:89] "etcd-embed-certs-714571" [915a3f64-e4ff-4e79-8e0e-77eb772f32bb] Running
	I1120 21:22:07.779701  545013 system_pods.go:89] "kindnet-5ctwj" [ff958987-b086-4e34-90b6-52529cde3bc6] Running
	I1120 21:22:07.779709  545013 system_pods.go:89] "kube-apiserver-embed-certs-714571" [8438bedc-3f5e-4a56-858b-91fa8d08dc6a] Running
	I1120 21:22:07.779716  545013 system_pods.go:89] "kube-controller-manager-embed-certs-714571" [abd4308d-311a-4612-bbd0-d80d7fbadfc9] Running
	I1120 21:22:07.779722  545013 system_pods.go:89] "kube-proxy-nlj6n" [1b45af7f-c118-45b1-9890-b698758957be] Running
	I1120 21:22:07.779728  545013 system_pods.go:89] "kube-scheduler-embed-certs-714571" [b308bf99-816d-4b34-81bd-c06e77c249e8] Running
	I1120 21:22:07.779734  545013 system_pods.go:89] "storage-provisioner" [24446e04-e9a1-4fb2-80d1-6e96d13cbf06] Running
	I1120 21:22:07.779746  545013 system_pods.go:126] duration metric: took 1.251805695s to wait for k8s-apps to be running ...
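
A rough kubectl equivalent of the retry loop above, assuming the kubeconfig already points at embed-certs-714571 (the waiter keeps polling until coredns leaves Pending):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
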
	I1120 21:22:07.779758  545013 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:22:07.779829  545013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:07.799639  545013 system_svc.go:56] duration metric: took 19.869713ms WaitForService to wait for kubelet
	I1120 21:22:07.799794  545013 kubeadm.go:587] duration metric: took 12.767364534s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:07.799838  545013 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:22:07.804050  545013 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:22:07.804088  545013 node_conditions.go:123] node cpu capacity is 8
	I1120 21:22:07.804105  545013 node_conditions.go:105] duration metric: took 4.251074ms to run NodePressure ...
	I1120 21:22:07.804264  545013 start.go:242] waiting for startup goroutines ...
	I1120 21:22:07.804280  545013 start.go:247] waiting for cluster config update ...
	I1120 21:22:07.804294  545013 start.go:256] writing updated cluster config ...
	I1120 21:22:07.804642  545013 ssh_runner.go:195] Run: rm -f paused
	I1120 21:22:07.809897  545013 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:07.815290  545013 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g47lf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.820962  545013 pod_ready.go:94] pod "coredns-66bc5c9577-g47lf" is "Ready"
	I1120 21:22:07.820990  545013 pod_ready.go:86] duration metric: took 5.670642ms for pod "coredns-66bc5c9577-g47lf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.823920  545013 pod_ready.go:83] waiting for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.829557  545013 pod_ready.go:94] pod "etcd-embed-certs-714571" is "Ready"
	I1120 21:22:07.829586  545013 pod_ready.go:86] duration metric: took 5.638655ms for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.832044  545013 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.836757  545013 pod_ready.go:94] pod "kube-apiserver-embed-certs-714571" is "Ready"
	I1120 21:22:07.836783  545013 pod_ready.go:86] duration metric: took 4.685201ms for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:07.838959  545013 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:08.215033  545013 pod_ready.go:94] pod "kube-controller-manager-embed-certs-714571" is "Ready"
	I1120 21:22:08.215060  545013 pod_ready.go:86] duration metric: took 376.074612ms for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:08.416062  545013 pod_ready.go:83] waiting for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:08.815660  545013 pod_ready.go:94] pod "kube-proxy-nlj6n" is "Ready"
	I1120 21:22:08.815697  545013 pod_ready.go:86] duration metric: took 399.60431ms for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:09.015452  545013 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:09.415106  545013 pod_ready.go:94] pod "kube-scheduler-embed-certs-714571" is "Ready"
	I1120 21:22:09.415139  545013 pod_ready.go:86] duration metric: took 399.660413ms for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:09.415154  545013 pod_ready.go:40] duration metric: took 1.605216473s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:09.460894  545013 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:22:09.462768  545013 out.go:179] * Done! kubectl is now configured to use "embed-certs-714571" cluster and "default" namespace by default
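
With that message, kubectl is already pointed at the new cluster; a quick sanity check:

	kubectl config current-context   # embed-certs-714571
	kubectl get nodes -o wide
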
	I1120 21:22:05.695706  555241 addons.go:515] duration metric: took 3.263575548s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1120 21:22:05.699512  555241 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:22:05.701107  555241 api_server.go:141] control plane version: v1.28.0
	I1120 21:22:05.701138  555241 api_server.go:131] duration metric: took 10.464486ms to wait for apiserver health ...
	I1120 21:22:05.701151  555241 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:22:05.707402  555241 system_pods.go:59] 8 kube-system pods found
	I1120 21:22:05.707474  555241 system_pods.go:61] "coredns-5dd5756b68-5t2cr" [3f5376b3-6d7d-4564-9dc0-d27a0882903a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:05.707490  555241 system_pods.go:61] "etcd-old-k8s-version-936214" [3d9fb3fe-2cfd-411c-8b81-ae41534e2e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:22:05.707504  555241 system_pods.go:61] "kindnet-949k6" [d19f5da9-8bc8-46f6-a8d5-25503820d80d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:22:05.707514  555241 system_pods.go:61] "kube-apiserver-old-k8s-version-936214" [bfe66a0e-d1c8-4bb1-a09a-8828044a9126] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:22:05.707523  555241 system_pods.go:61] "kube-controller-manager-old-k8s-version-936214" [7c3a06aa-791a-4cdf-8e5d-8f6768b0a5bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:22:05.707532  555241 system_pods.go:61] "kube-proxy-z9sk2" [9bc52d10-b8b8-4805-ae1b-cbae97dc25ad] Running
	I1120 21:22:05.707550  555241 system_pods.go:61] "kube-scheduler-old-k8s-version-936214" [a2928b34-6224-420a-bdeb-6f2b2ad1d40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:22:05.707560  555241 system_pods.go:61] "storage-provisioner" [cf765557-656c-4944-bd0d-2cd578d3e885] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:05.707573  555241 system_pods.go:74] duration metric: took 6.412439ms to wait for pod list to return data ...
	I1120 21:22:05.707587  555241 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:22:05.712073  555241 default_sa.go:45] found service account: "default"
	I1120 21:22:05.712101  555241 default_sa.go:55] duration metric: took 4.505263ms for default service account to be created ...
	I1120 21:22:05.712173  555241 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:22:05.717114  555241 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:05.717152  555241 system_pods.go:89] "coredns-5dd5756b68-5t2cr" [3f5376b3-6d7d-4564-9dc0-d27a0882903a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:05.717165  555241 system_pods.go:89] "etcd-old-k8s-version-936214" [3d9fb3fe-2cfd-411c-8b81-ae41534e2e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:22:05.717174  555241 system_pods.go:89] "kindnet-949k6" [d19f5da9-8bc8-46f6-a8d5-25503820d80d] Running
	I1120 21:22:05.717183  555241 system_pods.go:89] "kube-apiserver-old-k8s-version-936214" [bfe66a0e-d1c8-4bb1-a09a-8828044a9126] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:22:05.717191  555241 system_pods.go:89] "kube-controller-manager-old-k8s-version-936214" [7c3a06aa-791a-4cdf-8e5d-8f6768b0a5bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:22:05.717197  555241 system_pods.go:89] "kube-proxy-z9sk2" [9bc52d10-b8b8-4805-ae1b-cbae97dc25ad] Running
	I1120 21:22:05.717204  555241 system_pods.go:89] "kube-scheduler-old-k8s-version-936214" [a2928b34-6224-420a-bdeb-6f2b2ad1d40f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:22:05.717248  555241 system_pods.go:89] "storage-provisioner" [cf765557-656c-4944-bd0d-2cd578d3e885] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:05.717265  555241 system_pods.go:126] duration metric: took 5.079431ms to wait for k8s-apps to be running ...
	I1120 21:22:05.717277  555241 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:22:05.717334  555241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:05.736319  555241 system_svc.go:56] duration metric: took 19.031267ms WaitForService to wait for kubelet
	I1120 21:22:05.736355  555241 kubeadm.go:587] duration metric: took 3.304406951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:05.736381  555241 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:22:05.739425  555241 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:22:05.739453  555241 node_conditions.go:123] node cpu capacity is 8
	I1120 21:22:05.739466  555241 node_conditions.go:105] duration metric: took 3.080165ms to run NodePressure ...
	I1120 21:22:05.739479  555241 start.go:242] waiting for startup goroutines ...
	I1120 21:22:05.739485  555241 start.go:247] waiting for cluster config update ...
	I1120 21:22:05.739496  555241 start.go:256] writing updated cluster config ...
	I1120 21:22:05.739784  555241 ssh_runner.go:195] Run: rm -f paused
	I1120 21:22:05.744940  555241 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:05.750284  555241 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5t2cr" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:22:07.757087  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	I1120 21:22:07.456165  552911 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:22:07.460958  552911 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:22:07.460976  552911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:22:07.475310  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:22:07.711889  552911 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:22:07.711934  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:07.712002  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-454524 minikube.k8s.io/updated_at=2025_11_20T21_22_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=default-k8s-diff-port-454524 minikube.k8s.io/primary=true
	I1120 21:22:07.805146  552911 ops.go:34] apiserver oom_adj: -16
	I1120 21:22:07.805289  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:08.305722  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:08.805902  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:09.306284  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:09.806057  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:10.306091  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:10.806342  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:11.305371  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:11.805475  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:12.305982  552911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:22:12.377582  552911 kubeadm.go:1114] duration metric: took 4.665696889s to wait for elevateKubeSystemPrivileges
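
The polling loop above waits for the controller-manager to create the "default" ServiceAccount before granting it cluster-admin; the manual equivalent is the same command the harness runs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	  --kubeconfig=/var/lib/minikube/kubeconfig
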
	I1120 21:22:12.377619  552911 kubeadm.go:403] duration metric: took 16.37487755s to StartCluster
	I1120 21:22:12.377637  552911 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:12.377711  552911 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:12.379503  552911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:12.379743  552911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:22:12.379759  552911 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:22:12.379827  552911 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:22:12.379933  552911 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-454524"
	I1120 21:22:12.379957  552911 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-454524"
	I1120 21:22:12.379992  552911 host.go:66] Checking if "default-k8s-diff-port-454524" exists ...
	I1120 21:22:12.380047  552911 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:12.380085  552911 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-454524"
	I1120 21:22:12.380113  552911 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-454524"
	I1120 21:22:12.380471  552911 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:12.380503  552911 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:12.381253  552911 out.go:179] * Verifying Kubernetes components...
	I1120 21:22:12.382378  552911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:12.404314  552911 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:22:12.405569  552911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:12.405589  552911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:22:12.405644  552911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:12.406711  552911 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-454524"
	I1120 21:22:12.406756  552911 host.go:66] Checking if "default-k8s-diff-port-454524" exists ...
	I1120 21:22:12.407273  552911 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:12.433081  552911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/default-k8s-diff-port-454524/id_rsa Username:docker}
	I1120 21:22:12.437186  552911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:12.437211  552911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:22:12.437290  552911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:12.460399  552911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/default-k8s-diff-port-454524/id_rsa Username:docker}
	I1120 21:22:12.472945  552911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:22:12.526665  552911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:22:12.556483  552911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:12.578408  552911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:12.661478  552911 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
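
To confirm the injected record, dump the ConfigMap the sed pipeline above rewrote (assuming kubectl is pointed at default-k8s-diff-port-454524):

	kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'host.minikube.internal'
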
	I1120 21:22:12.663165  552911 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-454524" to be "Ready" ...
	I1120 21:22:12.892424  552911 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1120 21:22:10.256613  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:12.257428  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	I1120 21:22:12.893510  552911 addons.go:515] duration metric: took 513.678902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:22:13.166889  552911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-454524" context rescaled to 1 replicas
	W1120 21:22:14.667150  552911 node_ready.go:57] node "default-k8s-diff-port-454524" has "Ready":"False" status (will retry)
	W1120 21:22:14.756720  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:17.256419  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:19.260556  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:06 embed-certs-714571 crio[775]: time="2025-11-20T21:22:06.65829416Z" level=info msg="Started container" PID=1844 containerID=3fa9135ac95478f4778ce21956196f6631233b60605c3666c9252934e56e44ec description=kube-system/storage-provisioner/storage-provisioner id=38232176-b444-4418-b5e4-7892ff70bc74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc95041ebae242b91f236bee71b31470dd92d08c45ef2faa1cd6e42e01549750
	Nov 20 21:22:06 embed-certs-714571 crio[775]: time="2025-11-20T21:22:06.66073816Z" level=info msg="Started container" PID=1847 containerID=d827309fc86eb7d6d72d98cb3034340df3cb11a54e6c5a043ea845ff34a2ae0d description=kube-system/coredns-66bc5c9577-g47lf/coredns id=6e43846c-aa19-40c5-a7d4-2b1e42c75880 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a1d9bfa9bf353dac430a73414cd3f9678a19b8e0f1e9a1f9e51b21513217713
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.911678529Z" level=info msg="Running pod sandbox: default/busybox/POD" id=87f9ea7a-49c6-49fa-91eb-3a987e991f36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.911771111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.917236586Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:401afa9e2c7fa94bcd947c2b5e6ab737764f5838bd92eaf4c85773d27675396c UID:2f0d580b-0733-4eca-994c-f26f9f207bcc NetNS:/var/run/netns/b9ca27bc-01c8-4bc6-940d-d7c9580046c1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000330350}] Aliases:map[]}"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.917267515Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.927140172Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:401afa9e2c7fa94bcd947c2b5e6ab737764f5838bd92eaf4c85773d27675396c UID:2f0d580b-0733-4eca-994c-f26f9f207bcc NetNS:/var/run/netns/b9ca27bc-01c8-4bc6-940d-d7c9580046c1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000330350}] Aliases:map[]}"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.927328434Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.928204982Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.929280296Z" level=info msg="Ran pod sandbox 401afa9e2c7fa94bcd947c2b5e6ab737764f5838bd92eaf4c85773d27675396c with infra container: default/busybox/POD" id=87f9ea7a-49c6-49fa-91eb-3a987e991f36 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.930595837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=462fb4c7-15b7-44c7-8cc8-6297cfcf78d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.930741467Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=462fb4c7-15b7-44c7-8cc8-6297cfcf78d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.930779755Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=462fb4c7-15b7-44c7-8cc8-6297cfcf78d1 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.931726508Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=501ba35c-6a84-4aef-85fe-bfde4b37f96f name=/runtime.v1.ImageService/PullImage
	Nov 20 21:22:09 embed-certs-714571 crio[775]: time="2025-11-20T21:22:09.934017687Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.197531232Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=501ba35c-6a84-4aef-85fe-bfde4b37f96f name=/runtime.v1.ImageService/PullImage
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.198346577Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4f9215a-469b-4d25-9b90-8b3cc5867ebc name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.200018662Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4cda3dd3-1e16-4a59-b5df-b1efd895b447 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.203874953Z" level=info msg="Creating container: default/busybox/busybox" id=4ce4952f-a354-47e8-a196-da5905d52271 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.203983196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.207432743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.20783563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.253808303Z" level=info msg="Created container 5a7564548ee5d7f7667ec9a0d6e5efd5aac10463a092aff3cb1841237b7fc228: default/busybox/busybox" id=4ce4952f-a354-47e8-a196-da5905d52271 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.254503054Z" level=info msg="Starting container: 5a7564548ee5d7f7667ec9a0d6e5efd5aac10463a092aff3cb1841237b7fc228" id=308cfe01-7a74-4a14-8878-18a416bf056f name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:12 embed-certs-714571 crio[775]: time="2025-11-20T21:22:12.25650268Z" level=info msg="Started container" PID=1920 containerID=5a7564548ee5d7f7667ec9a0d6e5efd5aac10463a092aff3cb1841237b7fc228 description=default/busybox/busybox id=308cfe01-7a74-4a14-8878-18a416bf056f name=/runtime.v1.RuntimeService/StartContainer sandboxID=401afa9e2c7fa94bcd947c2b5e6ab737764f5838bd92eaf4c85773d27675396c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5a7564548ee5d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   401afa9e2c7fa       busybox                                      default
	d827309fc86eb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   5a1d9bfa9bf35       coredns-66bc5c9577-g47lf                     kube-system
	3fa9135ac9547       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   fc95041ebae24       storage-provisioner                          kube-system
	c21079b68f372       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   80b1f4867e099       kindnet-5ctwj                                kube-system
	8f2fb850c583b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   973f6d4da00d6       kube-proxy-nlj6n                             kube-system
	75e2e6ff60307       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   ec1a187c28944       kube-apiserver-embed-certs-714571            kube-system
	b5284d32e96dc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   332af591e168c       kube-controller-manager-embed-certs-714571   kube-system
	e6eb33873727d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   2bb9426fc7b12       etcd-embed-certs-714571                      kube-system
	9f97b73efb24d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   f80ad0c1673b4       kube-scheduler-embed-certs-714571            kube-system
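	
	The table above is the runtime's answer to the CRI ListContainers call (the same data crictl ps -a renders). A hedged sketch of issuing that query directly against the CRI-O socket; the socket path is CRI-O's default, and the 13-character ID truncation simply mirrors the CONTAINER column.
	
	// crilist.go: list containers over the CRI gRPC API, the same data
	// shown in the container status table above.
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial CRI-O's default runtime socket; adjust the path if relocated.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Id, image, and state map onto the CONTAINER/IMAGE/STATE columns.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Image.GetImage(), c.State)
		}
	}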
	
	
	==> coredns [d827309fc86eb7d6d72d98cb3034340df3cb11a54e6c5a043ea845ff34a2ae0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49508 - 894 "HINFO IN 2039333577803156717.9215892703635676058. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01694828s
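	
	The HINFO query above is CoreDNS's own startup self-check. To verify the server end to end, one can point a resolver at the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log further down); a small sketch, only meaningful when run from a pod or from the node itself:
	
	// dnscheck.go: resolve a service name through the cluster DNS.
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// 10.96.0.10 is the kube-dns service IP from this cluster.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err)
	}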
	
	
	==> describe nodes <==
	Name:               embed-certs-714571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-714571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-714571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-714571
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:22:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:22:06 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:22:06 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:22:06 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:22:06 +0000   Thu, 20 Nov 2025 21:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-714571
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                b8c8edd2-d291-40b8-8776-13cdc9b6d9a8
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-g47lf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-714571                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-5ctwj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-714571             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-714571    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-nlj6n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-714571             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 37s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 37s)  kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 37s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-714571 event: Registered Node embed-certs-714571 in Controller
	  Normal  NodeReady                14s                kubelet          Node embed-certs-714571 status is now: NodeReady
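	
	The Allocated resources figure above (850m CPU requested) can be re-derived by summing container requests over the node's non-terminated pods. A client-go sketch under that assumption; the field selector and node name are taken from this report.
	
	// noderequests.go: sum CPU requests of non-terminated pods on a node,
	// reproducing the "Allocated resources" figure from describe node.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=embed-certs-714571",
		})
		if err != nil {
			panic(err)
		}
		cpu := resource.NewQuantity(0, resource.DecimalSI)
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
				continue // describe node only counts non-terminated pods
			}
			for _, c := range p.Spec.Containers {
				if r, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
					cpu.Add(r)
				}
			}
		}
		fmt.Println("cpu requests:", cpu.String()) // expect 850m for this node
	}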
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [e6eb33873727d13a496ca9d27d431223bd2a23a1d1cc4aed001072e5fc07006e] <==
	{"level":"info","ts":"2025-11-20T21:21:49.209649Z","caller":"traceutil/trace.go:172","msg":"trace[1171816268] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:234; }","duration":"248.428904ms","start":"2025-11-20T21:21:48.961201Z","end":"2025-11-20T21:21:49.209629Z","steps":["trace[1171816268] 'agreement among raft nodes before linearized reading'  (duration: 92.792254ms)","trace[1171816268] 'range keys from in-memory index tree'  (duration: 155.490605ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:49.209729Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.64882ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356781582282555 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-714571.1879d3e3b188d554\" mod_revision:225 > success:<request_put:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188d554\" value_size:630 lease:6414984744727506669 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188d554\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-20T21:21:49.209895Z","caller":"traceutil/trace.go:172","msg":"trace[1681495549] transaction","detail":"{read_only:false; response_revision:236; number_of_response:1; }","duration":"248.914884ms","start":"2025-11-20T21:21:48.960968Z","end":"2025-11-20T21:21:49.209883Z","steps":["trace[1681495549] 'process raft request'  (duration: 248.854076ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:49.209910Z","caller":"traceutil/trace.go:172","msg":"trace[1199359481] transaction","detail":"{read_only:false; response_revision:235; number_of_response:1; }","duration":"250.217285ms","start":"2025-11-20T21:21:48.959678Z","end":"2025-11-20T21:21:49.209895Z","steps":["trace[1199359481] 'process raft request'  (duration: 94.323221ms)","trace[1199359481] 'compare'  (duration: 155.538961ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:49.530794Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.83135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-714571\" limit:1 ","response":"range_response_count:1 size:4559"}
	{"level":"info","ts":"2025-11-20T21:21:49.530857Z","caller":"traceutil/trace.go:172","msg":"trace[1809165798] range","detail":"{range_begin:/registry/minions/embed-certs-714571; range_end:; response_count:1; response_revision:237; }","duration":"241.911309ms","start":"2025-11-20T21:21:49.288930Z","end":"2025-11-20T21:21:49.530841Z","steps":["trace[1809165798] 'agreement among raft nodes before linearized reading'  (duration: 83.020237ms)","trace[1809165798] 'range keys from in-memory index tree'  (duration: 158.689645ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:49.530884Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.816017ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356781582282560 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" mod_revision:228 > success:<request_put:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" value_size:628 lease:6414984744727506669 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-20T21:21:49.530962Z","caller":"traceutil/trace.go:172","msg":"trace[757559640] linearizableReadLoop","detail":"{readStateIndex:244; appliedIndex:243; }","duration":"159.018419ms","start":"2025-11-20T21:21:49.371931Z","end":"2025-11-20T21:21:49.530949Z","steps":["trace[757559640] 'read index received'  (duration: 27.258µs)","trace[757559640] 'applied index is now lower than readState.Index'  (duration: 158.990079ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:21:49.531038Z","caller":"traceutil/trace.go:172","msg":"trace[390654335] transaction","detail":"{read_only:false; response_revision:238; number_of_response:1; }","duration":"317.210142ms","start":"2025-11-20T21:21:49.213807Z","end":"2025-11-20T21:21:49.531017Z","steps":["trace[390654335] 'process raft request'  (duration: 158.186062ms)","trace[390654335] 'compare'  (duration: 158.714475ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:49.531047Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.383536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-20T21:21:49.531131Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-20T21:21:49.213790Z","time spent":"317.288325ms","remote":"127.0.0.1:49730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" mod_revision:228 > success:<request_put:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" value_size:628 lease:6414984744727506669 >> failure:<request_range:<key:\"/registry/events/default/embed-certs-714571.1879d3e3b188f413\" > >"}
	{"level":"info","ts":"2025-11-20T21:21:49.531189Z","caller":"traceutil/trace.go:172","msg":"trace[774538030] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslicemirroring-controller; range_end:; response_count:0; response_revision:238; }","duration":"241.517835ms","start":"2025-11-20T21:21:49.289654Z","end":"2025-11-20T21:21:49.531171Z","steps":["trace[774538030] 'agreement among raft nodes before linearized reading'  (duration: 241.346601ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:49.726715Z","caller":"traceutil/trace.go:172","msg":"trace[686628926] linearizableReadLoop","detail":"{readStateIndex:245; appliedIndex:245; }","duration":"123.053326ms","start":"2025-11-20T21:21:49.603639Z","end":"2025-11-20T21:21:49.726693Z","steps":["trace[686628926] 'read index received'  (duration: 123.045978ms)","trace[686628926] 'applied index is now lower than readState.Index'  (duration: 6.038µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:49.730043Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.382541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/embed-certs-714571.1879d3e3b188d554\" limit:1 ","response":"range_response_count:1 size:723"}
	{"level":"info","ts":"2025-11-20T21:21:49.730105Z","caller":"traceutil/trace.go:172","msg":"trace[1718386593] range","detail":"{range_begin:/registry/events/default/embed-certs-714571.1879d3e3b188d554; range_end:; response_count:1; response_revision:239; }","duration":"126.460616ms","start":"2025-11-20T21:21:49.603629Z","end":"2025-11-20T21:21:49.730089Z","steps":["trace[1718386593] 'agreement among raft nodes before linearized reading'  (duration: 123.141788ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:49.730112Z","caller":"traceutil/trace.go:172","msg":"trace[1572270022] transaction","detail":"{read_only:false; response_revision:241; number_of_response:1; }","duration":"193.46468ms","start":"2025-11-20T21:21:49.536625Z","end":"2025-11-20T21:21:49.730090Z","steps":["trace[1572270022] 'process raft request'  (duration: 193.409214ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:49.730134Z","caller":"traceutil/trace.go:172","msg":"trace[1969012800] transaction","detail":"{read_only:false; response_revision:240; number_of_response:1; }","duration":"193.806987ms","start":"2025-11-20T21:21:49.536303Z","end":"2025-11-20T21:21:49.730110Z","steps":["trace[1969012800] 'process raft request'  (duration: 190.459085ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-20T21:21:50.038863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.84745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/expand-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T21:21:50.038942Z","caller":"traceutil/trace.go:172","msg":"trace[77035926] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/expand-controller; range_end:; response_count:0; response_revision:249; }","duration":"129.936199ms","start":"2025-11-20T21:21:49.908980Z","end":"2025-11-20T21:21:50.038917Z","steps":["trace[77035926] 'agreement among raft nodes before linearized reading'  (duration: 80.287179ms)","trace[77035926] 'range keys from in-memory index tree'  (duration: 49.510256ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-20T21:21:50.039098Z","caller":"traceutil/trace.go:172","msg":"trace[719524253] transaction","detail":"{read_only:false; response_revision:251; number_of_response:1; }","duration":"148.559726ms","start":"2025-11-20T21:21:49.890524Z","end":"2025-11-20T21:21:50.039084Z","steps":["trace[719524253] 'process raft request'  (duration: 148.486757ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:21:50.039094Z","caller":"traceutil/trace.go:172","msg":"trace[1104440475] transaction","detail":"{read_only:false; response_revision:250; number_of_response:1; }","duration":"148.523168ms","start":"2025-11-20T21:21:49.890524Z","end":"2025-11-20T21:21:50.039047Z","steps":["trace[1104440475] 'process raft request'  (duration: 98.757829ms)","trace[1104440475] 'compare'  (duration: 49.525094ms)"],"step_count":2}
	2025/11/20 21:21:50 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-11-20T21:21:50.325006Z","caller":"traceutil/trace.go:172","msg":"trace[1023460043] linearizableReadLoop","detail":"{readStateIndex:269; appliedIndex:269; }","duration":"118.751514ms","start":"2025-11-20T21:21:50.206235Z","end":"2025-11-20T21:21:50.324987Z","steps":["trace[1023460043] 'read index received'  (duration: 118.742232ms)","trace[1023460043] 'applied index is now lower than readState.Index'  (duration: 7.578µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-20T21:21:50.370421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.163148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-20T21:21:50.370486Z","caller":"traceutil/trace.go:172","msg":"trace[515967765] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:0; response_revision:263; }","duration":"164.239705ms","start":"2025-11-20T21:21:50.206229Z","end":"2025-11-20T21:21:50.370469Z","steps":["trace[515967765] 'agreement among raft nodes before linearized reading'  (duration: 118.827189ms)","trace[515967765] 'range keys from in-memory index tree'  (duration: 45.30641ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:22:20 up  4:04,  0 user,  load average: 5.35, 4.85, 2.90
	Linux embed-certs-714571 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c21079b68f3724c9f8a789e06c187d1b11271f603709aaff51dcf344a58e10fb] <==
	I1120 21:21:55.590409       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:21:55.590680       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:21:55.590883       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:21:55.590910       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:21:55.590946       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:21:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:21:55.834707       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:21:55.834756       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:21:55.834771       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:21:55.834925       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:21:56.234859       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:21:56.286588       1 metrics.go:72] Registering metrics
	I1120 21:21:56.286731       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:05.799338       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:22:05.799398       1 main.go:301] handling current node
	I1120 21:22:15.797322       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:22:15.797369       1 main.go:301] handling current node
	
	
	==> kube-apiserver [75e2e6ff603074a9e61fb012bec4a8f11d30a4630e7bc52c58c916cb5d583983] <==
	I1120 21:21:46.964829       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:21:47.563721       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:21:47.568016       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:21:47.568040       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:21:48.138153       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:21:48.183657       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:21:48.409622       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:21:48.719385       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1120 21:21:48.719696       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:21:48.720641       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:21:48.728963       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	{"level":"warn","ts":"2025-11-20T21:21:50.321743Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f4e780/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1120 21:21:50.321909       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:21:50.321945       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1120 21:21:50.321945       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.493µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1120 21:21:50.323148       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1120 21:21:50.323275       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.44585ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	I1120 21:21:50.498679       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:21:50.511235       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:21:50.523444       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:21:54.740620       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:21:54.791936       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:21:54.798380       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:21:55.118028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1120 21:22:18.703258       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50400: use of closed network connection
	
	
	==> kube-controller-manager [b5284d32e96dc67bcc500243d46b9f66bfb8a64c716dc3e45ba08291af95b606] <==
	I1120 21:21:54.023454       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:21:54.033375       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:21:54.035910       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:21:54.035930       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:21:54.035946       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:21:54.037281       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:21:54.038082       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:21:54.038191       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:21:54.038238       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:21:54.038253       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:21:54.038275       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:21:54.038327       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:21:54.038364       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:21:54.038459       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:21:54.038486       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:21:54.038513       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:21:54.038471       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:21:54.038976       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:21:54.040658       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:21:54.042921       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:21:54.048091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:21:54.051545       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:21:54.070997       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:21:54.082346       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:22:08.992361       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f2fb850c583bdbf7ce3494c155de08c0c2559bf5fa3d8453e79fd69f1ce78a2] <==
	I1120 21:21:55.415924       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:21:55.504609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:21:55.606779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:21:55.606821       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:21:55.606976       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:21:55.629057       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:21:55.629128       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:21:55.636800       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:21:55.637314       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:21:55.637460       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:21:55.643765       1 config.go:200] "Starting service config controller"
	I1120 21:21:55.644011       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:21:55.644030       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:21:55.644186       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:21:55.644308       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:21:55.646281       1 config.go:309] "Starting node config controller"
	I1120 21:21:55.646356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:21:55.646410       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:21:55.650033       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:21:55.745152       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:21:55.745202       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:21:55.751452       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9f97b73efb24dcbe22261c0406c60889a73dd83c86f7bd9efd78e75af0103950] <==
	E1120 21:21:46.623928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:21:46.624008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:21:46.624165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:21:46.624269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:21:46.624374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:21:46.624445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:21:46.624591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:21:46.624623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1120 21:21:46.624621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:21:46.624640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:21:46.624918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:21:46.625086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:21:47.484274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:21:47.521194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:21:47.580312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:21:47.726375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:21:47.782740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:21:47.784952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 21:21:47.832528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:21:47.862248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:21:47.885620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:21:47.890728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:21:47.910359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:21:47.911386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1120 21:21:50.306951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:21:51 embed-certs-714571 kubelet[1327]: I1120 21:21:51.495125    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-714571" podStartSLOduration=3.495103801 podStartE2EDuration="3.495103801s" podCreationTimestamp="2025-11-20 21:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:51.484842682 +0000 UTC m=+1.132763479" watchObservedRunningTime="2025-11-20 21:21:51.495103801 +0000 UTC m=+1.143024598"
	Nov 20 21:21:51 embed-certs-714571 kubelet[1327]: I1120 21:21:51.495519    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-714571" podStartSLOduration=1.495502418 podStartE2EDuration="1.495502418s" podCreationTimestamp="2025-11-20 21:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:51.494800315 +0000 UTC m=+1.142721113" watchObservedRunningTime="2025-11-20 21:21:51.495502418 +0000 UTC m=+1.143423215"
	Nov 20 21:21:51 embed-certs-714571 kubelet[1327]: I1120 21:21:51.508483    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-714571" podStartSLOduration=1.508457769 podStartE2EDuration="1.508457769s" podCreationTimestamp="2025-11-20 21:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:51.508025214 +0000 UTC m=+1.155946011" watchObservedRunningTime="2025-11-20 21:21:51.508457769 +0000 UTC m=+1.156378562"
	Nov 20 21:21:51 embed-certs-714571 kubelet[1327]: I1120 21:21:51.528421    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-714571" podStartSLOduration=1.5283937619999999 podStartE2EDuration="1.528393762s" podCreationTimestamp="2025-11-20 21:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:51.518690253 +0000 UTC m=+1.166611051" watchObservedRunningTime="2025-11-20 21:21:51.528393762 +0000 UTC m=+1.176314555"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.050900    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.051648    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851385    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b45af7f-c118-45b1-9890-b698758957be-xtables-lock\") pod \"kube-proxy-nlj6n\" (UID: \"1b45af7f-c118-45b1-9890-b698758957be\") " pod="kube-system/kube-proxy-nlj6n"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851479    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p6z\" (UniqueName: \"kubernetes.io/projected/1b45af7f-c118-45b1-9890-b698758957be-kube-api-access-s4p6z\") pod \"kube-proxy-nlj6n\" (UID: \"1b45af7f-c118-45b1-9890-b698758957be\") " pod="kube-system/kube-proxy-nlj6n"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851563    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff958987-b086-4e34-90b6-52529cde3bc6-lib-modules\") pod \"kindnet-5ctwj\" (UID: \"ff958987-b086-4e34-90b6-52529cde3bc6\") " pod="kube-system/kindnet-5ctwj"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851598    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b45af7f-c118-45b1-9890-b698758957be-kube-proxy\") pod \"kube-proxy-nlj6n\" (UID: \"1b45af7f-c118-45b1-9890-b698758957be\") " pod="kube-system/kube-proxy-nlj6n"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851619    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b45af7f-c118-45b1-9890-b698758957be-lib-modules\") pod \"kube-proxy-nlj6n\" (UID: \"1b45af7f-c118-45b1-9890-b698758957be\") " pod="kube-system/kube-proxy-nlj6n"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851639    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqhpc\" (UniqueName: \"kubernetes.io/projected/ff958987-b086-4e34-90b6-52529cde3bc6-kube-api-access-nqhpc\") pod \"kindnet-5ctwj\" (UID: \"ff958987-b086-4e34-90b6-52529cde3bc6\") " pod="kube-system/kindnet-5ctwj"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851666    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ff958987-b086-4e34-90b6-52529cde3bc6-cni-cfg\") pod \"kindnet-5ctwj\" (UID: \"ff958987-b086-4e34-90b6-52529cde3bc6\") " pod="kube-system/kindnet-5ctwj"
	Nov 20 21:21:54 embed-certs-714571 kubelet[1327]: I1120 21:21:54.851686    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff958987-b086-4e34-90b6-52529cde3bc6-xtables-lock\") pod \"kindnet-5ctwj\" (UID: \"ff958987-b086-4e34-90b6-52529cde3bc6\") " pod="kube-system/kindnet-5ctwj"
	Nov 20 21:21:55 embed-certs-714571 kubelet[1327]: I1120 21:21:55.507832    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5ctwj" podStartSLOduration=1.507808259 podStartE2EDuration="1.507808259s" podCreationTimestamp="2025-11-20 21:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:55.48946234 +0000 UTC m=+5.137383137" watchObservedRunningTime="2025-11-20 21:21:55.507808259 +0000 UTC m=+5.155729056"
	Nov 20 21:21:55 embed-certs-714571 kubelet[1327]: I1120 21:21:55.521943    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nlj6n" podStartSLOduration=1.521920958 podStartE2EDuration="1.521920958s" podCreationTimestamp="2025-11-20 21:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:21:55.52167517 +0000 UTC m=+5.169595974" watchObservedRunningTime="2025-11-20 21:21:55.521920958 +0000 UTC m=+5.169841754"
	Nov 20 21:22:06 embed-certs-714571 kubelet[1327]: I1120 21:22:06.266861    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:22:06 embed-certs-714571 kubelet[1327]: I1120 21:22:06.344778    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmm6x\" (UniqueName: \"kubernetes.io/projected/16cf09bd-2e55-45c9-bf4a-2fe540e25d19-kube-api-access-qmm6x\") pod \"coredns-66bc5c9577-g47lf\" (UID: \"16cf09bd-2e55-45c9-bf4a-2fe540e25d19\") " pod="kube-system/coredns-66bc5c9577-g47lf"
	Nov 20 21:22:06 embed-certs-714571 kubelet[1327]: I1120 21:22:06.344835    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16cf09bd-2e55-45c9-bf4a-2fe540e25d19-config-volume\") pod \"coredns-66bc5c9577-g47lf\" (UID: \"16cf09bd-2e55-45c9-bf4a-2fe540e25d19\") " pod="kube-system/coredns-66bc5c9577-g47lf"
	Nov 20 21:22:06 embed-certs-714571 kubelet[1327]: I1120 21:22:06.344867    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/24446e04-e9a1-4fb2-80d1-6e96d13cbf06-tmp\") pod \"storage-provisioner\" (UID: \"24446e04-e9a1-4fb2-80d1-6e96d13cbf06\") " pod="kube-system/storage-provisioner"
	Nov 20 21:22:06 embed-certs-714571 kubelet[1327]: I1120 21:22:06.344894    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npctc\" (UniqueName: \"kubernetes.io/projected/24446e04-e9a1-4fb2-80d1-6e96d13cbf06-kube-api-access-npctc\") pod \"storage-provisioner\" (UID: \"24446e04-e9a1-4fb2-80d1-6e96d13cbf06\") " pod="kube-system/storage-provisioner"
	Nov 20 21:22:07 embed-certs-714571 kubelet[1327]: I1120 21:22:07.543372    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.543346429 podStartE2EDuration="12.543346429s" podCreationTimestamp="2025-11-20 21:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:07.53104172 +0000 UTC m=+17.178962516" watchObservedRunningTime="2025-11-20 21:22:07.543346429 +0000 UTC m=+17.191267226"
	Nov 20 21:22:09 embed-certs-714571 kubelet[1327]: I1120 21:22:09.604887    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g47lf" podStartSLOduration=14.60485713 podStartE2EDuration="14.60485713s" podCreationTimestamp="2025-11-20 21:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:07.543503865 +0000 UTC m=+17.191424644" watchObservedRunningTime="2025-11-20 21:22:09.60485713 +0000 UTC m=+19.252777927"
	Nov 20 21:22:09 embed-certs-714571 kubelet[1327]: I1120 21:22:09.666443    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdcc\" (UniqueName: \"kubernetes.io/projected/2f0d580b-0733-4eca-994c-f26f9f207bcc-kube-api-access-pgdcc\") pod \"busybox\" (UID: \"2f0d580b-0733-4eca-994c-f26f9f207bcc\") " pod="default/busybox"
	Nov 20 21:22:12 embed-certs-714571 kubelet[1327]: I1120 21:22:12.546576    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.278295552 podStartE2EDuration="3.546554263s" podCreationTimestamp="2025-11-20 21:22:09 +0000 UTC" firstStartedPulling="2025-11-20 21:22:09.931090139 +0000 UTC m=+19.579010919" lastFinishedPulling="2025-11-20 21:22:12.199348837 +0000 UTC m=+21.847269630" observedRunningTime="2025-11-20 21:22:12.546382218 +0000 UTC m=+22.194303019" watchObservedRunningTime="2025-11-20 21:22:12.546554263 +0000 UTC m=+22.194475071"
	
	
	==> storage-provisioner [3fa9135ac95478f4778ce21956196f6631233b60605c3666c9252934e56e44ec] <==
	I1120 21:22:06.675506       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:22:06.691994       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:22:06.692058       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:22:06.699424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:06.708381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:22:06.708773       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:22:06.709549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-714571_cc19a7bb-0b5f-4d0a-88de-3bc376861f2c!
	I1120 21:22:06.708897       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"701650fb-022f-46cb-8ee0-f1d577e47078", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-714571_cc19a7bb-0b5f-4d0a-88de-3bc376861f2c became leader
	W1120 21:22:06.715616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:06.725385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:22:06.810095       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-714571_cc19a7bb-0b5f-4d0a-88de-3bc376861f2c!
	W1120 21:22:08.729183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:08.734771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:10.738412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:10.742871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:12.746501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:12.750823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:14.754684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:14.759116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:16.762764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:16.769036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:18.773143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:18.778392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:20.784139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:20.792436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-714571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.92s)
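Editor's note on the repeated W-level lines in the storage-provisioner log above: the provisioner still takes its leader-election lock on a v1 Endpoints object ("k8s.io-minikube-hostpath"), and, as the warning text itself says, v1 Endpoints is deprecated in v1.33+, so the API server flags every renewal request. These warnings are noise here, not the cause of the failure. For reference, a minimal sketch of the Lease-based election that client-go offers as the replacement (lease name, namespace, and identity below are illustrative, not minikube's actual wiring):

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease-based lock: coordination.k8s.io/v1 Leases instead of v1
		// Endpoints, so renewals produce no deprecation warnings.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* begin provisioning */ },
				OnStoppedLeading: func() { os.Exit(1) }, // lost the lease; stop doing leader work
			},
		})
	}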

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (299.162168ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-454524 describe deploy/metrics-server -n kube-system: exit status 1 (71.396316ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-454524 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
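Editor's note: the root cause above is the MK_ADDON_ENABLE_PAUSED path. Before enabling an addon, minikube checks whether the runtime has paused containers by shelling out to `sudo runc list -f json`; on this crio node the command itself exited non-zero with `open /run/runc: no such file or directory` (one plausible reading: runc's default state directory had not been created), so the check errored instead of returning an empty list, the enable aborted, and the later `kubectl describe deploy/metrics-server` naturally found nothing. Roughly, the shape of that check (a sketch under those assumptions; `listPaused` and `runcState` are illustrative names, not minikube's):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState mirrors the two fields of `runc list -f json` output we care about.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused shells out the same way the failing check appears to, then
	// filters the reported containers for "paused" status.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch this report hit: runc exits non-zero when its state
			// directory (/run/runc by default) is missing, so the whole
			// addon-enable aborts before anything is deployed.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused:", ids)
	}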
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-454524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-454524:

-- stdout --
	[
	    {
	        "Id": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	        "Created": "2025-11-20T21:21:50.606943325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 553674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:21:50.645575829Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hosts",
	        "LogPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b-json.log",
	        "Name": "/default-k8s-diff-port-454524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-454524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-454524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	                "LowerDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-454524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-454524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-454524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4c40ec8e27051a3f4fde7ce43df4e9c191feed567069478e2582b0216b12665e",
	            "SandboxKey": "/var/run/docker/netns/4c40ec8e2705",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-454524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a91837c366fb15344a0e0b6f73e85038ca163d1eb2c31d15bcf6f3ca26f3d04",
	                    "EndpointID": "71ef3c20ccb518745caae770998d027c7d197f887247a8cb54aa45a70f2b45d7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "aa:af:26:9e:4d:f3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-454524",
	                        "c409d5fe70c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
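One detail worth pulling out of the inspect dump above: HostConfig.PortBindings requests every port with an empty HostPort (i.e. let Docker pick an ephemeral port), and the ports actually assigned appear only under NetworkSettings.Ports, e.g. 8444/tcp mapped to 127.0.0.1:33106, which is how this profile's API server is reached from the host. A quick way to resolve such a mapping programmatically (a hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The Go template indexes into NetworkSettings.Ports to pull the host
		// port Docker assigned to the container's 8444/tcp binding.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-454524").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("API server reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}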
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25: (1.081725039s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p old-k8s-version-936214 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:22:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:22:20.290510  560374 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:22:20.290874  560374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:20.290886  560374 out.go:374] Setting ErrFile to fd 2...
	I1120 21:22:20.290891  560374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:20.291187  560374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:22:20.291902  560374 out.go:368] Setting JSON to false
	I1120 21:22:20.293795  560374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14682,"bootTime":1763659058,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:22:20.293933  560374 start.go:143] virtualization: kvm guest
	I1120 21:22:20.295854  560374 out.go:179] * [no-preload-166874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:22:20.297627  560374 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:22:20.297687  560374 notify.go:221] Checking for updates...
	I1120 21:22:20.300297  560374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:22:20.301905  560374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:20.303160  560374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:22:20.304439  560374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:22:20.305754  560374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:22:20.307488  560374 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:20.308096  560374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:22:20.341764  560374 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:22:20.341967  560374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:20.431355  560374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-20 21:22:20.4131747 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:20.431522  560374 docker.go:319] overlay module found
	I1120 21:22:20.433863  560374 out.go:179] * Using the docker driver based on existing profile
	I1120 21:22:20.435192  560374 start.go:309] selected driver: docker
	I1120 21:22:20.435210  560374 start.go:930] validating driver "docker" against &{Name:no-preload-166874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-166874 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:20.435424  560374 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:22:20.435939  560374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:20.530343  560374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-20 21:22:20.514936861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:20.530714  560374 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:20.530752  560374 cni.go:84] Creating CNI manager for ""
	I1120 21:22:20.530794  560374 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:20.530842  560374 start.go:353] cluster config:
	{Name:no-preload-166874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-166874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:20.533487  560374 out.go:179] * Starting "no-preload-166874" primary control-plane node in "no-preload-166874" cluster
	I1120 21:22:20.534946  560374 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:22:20.536277  560374 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:22:20.537505  560374 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:22:20.537651  560374 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/config.json ...
	I1120 21:22:20.538048  560374 cache.go:107] acquiring lock: {Name:mk46fb3495ec41c903aa56f93cd3c7096c70894a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538144  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1120 21:22:20.538156  560374 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.512µs
	I1120 21:22:20.538173  560374 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1120 21:22:20.538191  560374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:22:20.538412  560374 cache.go:107] acquiring lock: {Name:mk9c89248562f20192938041b6bd6552216aa964 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538512  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1120 21:22:20.538524  560374 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 132.484µs
	I1120 21:22:20.538543  560374 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1120 21:22:20.538561  560374 cache.go:107] acquiring lock: {Name:mk4dfd93400a2f33ce65618d1a080ca4db0e0974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538621  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1120 21:22:20.538628  560374 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 69.571µs
	I1120 21:22:20.538639  560374 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1120 21:22:20.538662  560374 cache.go:107] acquiring lock: {Name:mk3ef65ab505e52a2e42eb0f311706e7fd70a6fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538470  560374 cache.go:107] acquiring lock: {Name:mk2be9ed58cdb4f1296d889b8e8116efe113a268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538702  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1120 21:22:20.538709  560374 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 59.519µs
	I1120 21:22:20.538716  560374 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1120 21:22:20.538730  560374 cache.go:107] acquiring lock: {Name:mk79a3ae4ccdbefded83366d4b297ed019c44b2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538752  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1120 21:22:20.538767  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1120 21:22:20.538763  560374 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 314.779µs
	I1120 21:22:20.538776  560374 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1120 21:22:20.538774  560374 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 46µs
	I1120 21:22:20.538783  560374 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1120 21:22:20.538642  560374 cache.go:107] acquiring lock: {Name:mkad9a1cd50f0d1a7cf1eafc36ed25a29a7a089f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538480  560374 cache.go:107] acquiring lock: {Name:mk6f1272bc3f934f52791858bd6b496d9b42fac4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.538952  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1120 21:22:20.538966  560374 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 498.876µs
	I1120 21:22:20.538976  560374 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1120 21:22:20.538989  560374 cache.go:115] /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1120 21:22:20.538999  560374 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 393.773µs
	I1120 21:22:20.539006  560374 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1120 21:22:20.539015  560374 cache.go:87] Successfully saved all images to host disk.
	I1120 21:22:20.569441  560374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:22:20.569476  560374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:22:20.569492  560374 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:22:20.569524  560374 start.go:360] acquireMachinesLock for no-preload-166874: {Name:mk4895953f03c47a99abaabec927ddc58a5c3034 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:20.569595  560374 start.go:364] duration metric: took 47.253µs to acquireMachinesLock for "no-preload-166874"
	I1120 21:22:20.569621  560374 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:22:20.569629  560374 fix.go:54] fixHost starting: 
	I1120 21:22:20.569958  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:20.595669  560374 fix.go:112] recreateIfNeeded on no-preload-166874: state=Stopped err=<nil>
	W1120 21:22:20.595706  560374 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 21:22:21.264842  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:23.757543  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:21.669075  552911 node_ready.go:57] node "default-k8s-diff-port-454524" has "Ready":"False" status (will retry)
	I1120 21:22:23.666791  552911 node_ready.go:49] node "default-k8s-diff-port-454524" is "Ready"
	I1120 21:22:23.666831  552911 node_ready.go:38] duration metric: took 11.00363145s for node "default-k8s-diff-port-454524" to be "Ready" ...
	I1120 21:22:23.666856  552911 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:22:23.666919  552911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:22:23.684286  552911 api_server.go:72] duration metric: took 11.304488522s to wait for apiserver process to appear ...
	I1120 21:22:23.684318  552911 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:22:23.684343  552911 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:22:23.689726  552911 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:22:23.691152  552911 api_server.go:141] control plane version: v1.34.1
	I1120 21:22:23.691183  552911 api_server.go:131] duration metric: took 6.856242ms to wait for apiserver health ...
	I1120 21:22:23.691195  552911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:22:23.694302  552911 system_pods.go:59] 8 kube-system pods found
	I1120 21:22:23.694348  552911 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:23.694359  552911 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:23.694372  552911 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:23.694377  552911 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:23.694386  552911 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:23.694392  552911 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:23.694400  552911 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:23.694408  552911 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:23.694419  552911 system_pods.go:74] duration metric: took 3.216638ms to wait for pod list to return data ...
	I1120 21:22:23.694432  552911 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:22:23.696829  552911 default_sa.go:45] found service account: "default"
	I1120 21:22:23.696856  552911 default_sa.go:55] duration metric: took 2.417226ms for default service account to be created ...
	I1120 21:22:23.696865  552911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:22:23.699544  552911 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:23.699578  552911 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:23.699587  552911 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:23.699596  552911 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:23.699601  552911 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:23.699607  552911 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:23.699619  552911 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:23.699624  552911 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:23.699664  552911 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:23.699703  552911 retry.go:31] will retry after 240.307167ms: missing components: kube-dns
	I1120 21:22:23.945004  552911 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:23.945038  552911 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:23.945044  552911 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:23.945051  552911 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:23.945055  552911 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:23.945059  552911 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:23.945062  552911 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:23.945065  552911 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:23.945070  552911 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:23.945088  552911 retry.go:31] will retry after 260.498925ms: missing components: kube-dns
	I1120 21:22:24.210829  552911 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:24.210872  552911 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:24.210880  552911 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:24.210892  552911 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:24.210897  552911 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:24.210903  552911 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:24.210908  552911 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:24.210914  552911 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:24.210921  552911 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:24.210956  552911 retry.go:31] will retry after 408.860312ms: missing components: kube-dns
	I1120 21:22:24.624033  552911 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:24.624073  552911 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:24.624083  552911 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:24.624091  552911 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:24.624098  552911 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:24.624104  552911 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:24.624109  552911 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:24.624116  552911 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:24.624125  552911 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:24.624146  552911 retry.go:31] will retry after 479.555989ms: missing components: kube-dns
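
The retry lines above (240ms, 260ms, 409ms, 480ms) show minikube polling the kube-system pod list until no required component is missing, sleeping a jittered, growing interval between attempts. A minimal Go sketch of that pattern, where listRunning is a hypothetical stand-in for the real cluster query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// listRunning is a hypothetical stand-in for listing kube-system pods;
// it reports which required components are currently Running.
func listRunning() map[string]bool {
	return map[string]bool{"kube-dns": false} // sketch only
}

// waitForComponents polls until nothing is missing or the deadline passes,
// with a jittered, growing delay between attempts, as the log above shows.
func waitForComponents(required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		running := listRunning()
		var missing []string
		for _, c := range required {
			if !running[c] {
				missing = append(missing, c)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
		time.Sleep(d)
		backoff = backoff * 3 / 2
	}
	return errors.New("timed out waiting for k8s-apps")
}

func main() {
	if err := waitForComponents([]string{"kube-dns"}, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
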
	I1120 21:22:20.597509  560374 out.go:252] * Restarting existing docker container for "no-preload-166874" ...
	I1120 21:22:20.597607  560374 cli_runner.go:164] Run: docker start no-preload-166874
	I1120 21:22:20.962494  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:20.989455  560374 kic.go:430] container "no-preload-166874" state is running.
	I1120 21:22:20.989908  560374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-166874
	I1120 21:22:21.017112  560374 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/config.json ...
	I1120 21:22:21.017435  560374 machine.go:94] provisionDockerMachine start ...
	I1120 21:22:21.017555  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:21.045396  560374 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:21.045714  560374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1120 21:22:21.045728  560374 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:22:21.046419  560374 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1120 21:22:24.191981  560374 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-166874
	
	I1120 21:22:24.192022  560374 ubuntu.go:182] provisioning hostname "no-preload-166874"
	I1120 21:22:24.192091  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:24.214853  560374 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:24.215162  560374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1120 21:22:24.215183  560374 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-166874 && echo "no-preload-166874" | sudo tee /etc/hostname
	I1120 21:22:24.365120  560374 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-166874
	
	I1120 21:22:24.365224  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:24.386296  560374 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:24.386589  560374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1120 21:22:24.386610  560374 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-166874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-166874/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-166874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:22:24.524621  560374 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:22:24.524658  560374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:22:24.524720  560374 ubuntu.go:190] setting up certificates
	I1120 21:22:24.524737  560374 provision.go:84] configureAuth start
	I1120 21:22:24.524797  560374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-166874
	I1120 21:22:24.545183  560374 provision.go:143] copyHostCerts
	I1120 21:22:24.545292  560374 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:22:24.545314  560374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:22:24.545400  560374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:22:24.545542  560374 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:22:24.545558  560374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:22:24.545599  560374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:22:24.545691  560374 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:22:24.545702  560374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:22:24.545739  560374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:22:24.545813  560374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.no-preload-166874 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-166874]
	I1120 21:22:24.727158  560374 provision.go:177] copyRemoteCerts
	I1120 21:22:24.727259  560374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:22:24.727322  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:24.749959  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:24.854806  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:22:24.874937  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:22:24.894013  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1120 21:22:24.912930  560374 provision.go:87] duration metric: took 388.168483ms to configureAuth
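
configureAuth above refreshes the host certs and then (provision.go:117) generates a server certificate whose SANs cover every name the machine may be dialed by: 127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-166874. A sketch of issuing a SAN-bearing cert with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-166874"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go:117 line above:
		DNSNames:    []string{"localhost", "minikube", "no-preload-166874"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed for the sketch (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
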
	I1120 21:22:24.912958  560374 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:22:24.913153  560374 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:24.913303  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:24.932097  560374 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:24.932392  560374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1120 21:22:24.932420  560374 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:22:25.108530  552911 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:25.108563  552911 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running
	I1120 21:22:25.108569  552911 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running
	I1120 21:22:25.108633  552911 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:22:25.108637  552911 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running
	I1120 21:22:25.108641  552911 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running
	I1120 21:22:25.108644  552911 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:22:25.108648  552911 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running
	I1120 21:22:25.108651  552911 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running
	I1120 21:22:25.108660  552911 system_pods.go:126] duration metric: took 1.411789162s to wait for k8s-apps to be running ...
	I1120 21:22:25.108671  552911 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:22:25.108771  552911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:25.124624  552911 system_svc.go:56] duration metric: took 15.940502ms WaitForService to wait for kubelet
	I1120 21:22:25.124660  552911 kubeadm.go:587] duration metric: took 12.7448702s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:25.124681  552911 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:22:25.127889  552911 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:22:25.127918  552911 node_conditions.go:123] node cpu capacity is 8
	I1120 21:22:25.127957  552911 node_conditions.go:105] duration metric: took 3.271447ms to run NodePressure ...
	I1120 21:22:25.127969  552911 start.go:242] waiting for startup goroutines ...
	I1120 21:22:25.127976  552911 start.go:247] waiting for cluster config update ...
	I1120 21:22:25.127987  552911 start.go:256] writing updated cluster config ...
	I1120 21:22:25.128278  552911 ssh_runner.go:195] Run: rm -f paused
	I1120 21:22:25.132341  552911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:25.135948  552911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.140643  552911 pod_ready.go:94] pod "coredns-66bc5c9577-zkl9z" is "Ready"
	I1120 21:22:25.140670  552911 pod_ready.go:86] duration metric: took 4.699261ms for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.142965  552911 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.147335  552911 pod_ready.go:94] pod "etcd-default-k8s-diff-port-454524" is "Ready"
	I1120 21:22:25.147360  552911 pod_ready.go:86] duration metric: took 4.368297ms for pod "etcd-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.149614  552911 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.153705  552911 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-454524" is "Ready"
	I1120 21:22:25.153730  552911 pod_ready.go:86] duration metric: took 4.0968ms for pod "kube-apiserver-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.155608  552911 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.537665  552911 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-454524" is "Ready"
	I1120 21:22:25.537710  552911 pod_ready.go:86] duration metric: took 382.080647ms for pod "kube-controller-manager-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:25.737052  552911 pod_ready.go:83] waiting for pod "kube-proxy-fpnmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:26.137059  552911 pod_ready.go:94] pod "kube-proxy-fpnmp" is "Ready"
	I1120 21:22:26.137092  552911 pod_ready.go:86] duration metric: took 400.011951ms for pod "kube-proxy-fpnmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:26.337964  552911 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:26.737374  552911 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-454524" is "Ready"
	I1120 21:22:26.737401  552911 pod_ready.go:86] duration metric: took 399.395222ms for pod "kube-scheduler-default-k8s-diff-port-454524" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:22:26.737412  552911 pod_ready.go:40] duration metric: took 1.605038625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:26.786341  552911 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:22:26.788466  552911 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-454524" cluster and "default" namespace by default
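
The pod_ready lines above perform the "extra" wait: every kube-system pod carrying one of the listed labels must report the Ready condition before the cluster start is declared done. A sketch of the same check with client-go, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per component family, mirroring the label list in the log.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q Ready=%v\n", p.Name, ready)
		}
	}
}
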
	I1120 21:22:25.622191  560374 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:22:25.622243  560374 machine.go:97] duration metric: took 4.604757097s to provisionDockerMachine
	I1120 21:22:25.622260  560374 start.go:293] postStartSetup for "no-preload-166874" (driver="docker")
	I1120 21:22:25.622275  560374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:22:25.622370  560374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:22:25.622431  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:25.642716  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:25.742420  560374 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:22:25.746290  560374 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:22:25.746315  560374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:22:25.746325  560374 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:22:25.746375  560374 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:22:25.746445  560374 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:22:25.746534  560374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:22:25.756192  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:22:25.775270  560374 start.go:296] duration metric: took 152.994021ms for postStartSetup
	I1120 21:22:25.775348  560374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:22:25.775389  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:25.793497  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:25.888011  560374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:22:25.893163  560374 fix.go:56] duration metric: took 5.32352538s for fixHost
	I1120 21:22:25.893189  560374 start.go:83] releasing machines lock for "no-preload-166874", held for 5.323578222s
	I1120 21:22:25.893285  560374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-166874
	I1120 21:22:25.911551  560374 ssh_runner.go:195] Run: cat /version.json
	I1120 21:22:25.911611  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:25.911643  560374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:22:25.911731  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:25.931873  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:25.932168  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:26.088074  560374 ssh_runner.go:195] Run: systemctl --version
	I1120 21:22:26.095811  560374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:22:26.133535  560374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:22:26.139164  560374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:22:26.139250  560374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:22:26.148484  560374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
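
The find/mv pair above neutralizes any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the expected CNI (kindnet here) stays active. The same idea in plain Go, as a sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled, mirrors the -not -name *.mk_disabled filter
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
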
	I1120 21:22:26.148508  560374 start.go:496] detecting cgroup driver to use...
	I1120 21:22:26.148543  560374 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:22:26.148586  560374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:22:26.163354  560374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:22:26.176850  560374 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:22:26.176916  560374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:22:26.191680  560374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:22:26.205623  560374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:22:26.292496  560374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:22:26.377163  560374 docker.go:234] disabling docker service ...
	I1120 21:22:26.377272  560374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:22:26.392050  560374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:22:26.404772  560374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:22:26.488188  560374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:22:26.573490  560374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:22:26.586918  560374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:22:26.601707  560374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:22:26.601772  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.610954  560374 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:22:26.611024  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.620828  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.630332  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.639751  560374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:22:26.648328  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.657846  560374 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.666658  560374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:22:26.676146  560374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:22:26.683802  560374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:22:26.691677  560374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:26.786145  560374 ssh_runner.go:195] Run: sudo systemctl restart crio
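
All of the CRI-O tweaks above (pause_image, cgroup_manager, conmon_cgroup, the unprivileged-port sysctl) are idempotent in-place edits of the /etc/crio/crio.conf.d/02-crio.conf drop-in, applied with sed and sealed by a daemon-reload plus crio restart; the grep-or-sed pair keeps the default_sysctls edit from being applied twice. A Go sketch of one such edit, using a multi-line regexp instead of sed:

package main

import (
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image key in a CRI-O drop-in, mirroring
// the `sed -i 's|^.*pause_image = .*$|...|'` call in the log above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
}
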
	I1120 21:22:26.952045  560374 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:22:26.952139  560374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:22:26.957651  560374 start.go:564] Will wait 60s for crictl version
	I1120 21:22:26.957729  560374 ssh_runner.go:195] Run: which crictl
	I1120 21:22:26.962196  560374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:22:26.988717  560374 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:22:26.988806  560374 ssh_runner.go:195] Run: crio --version
	I1120 21:22:27.018773  560374 ssh_runner.go:195] Run: crio --version
	I1120 21:22:27.050446  560374 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:22:27.051645  560374 cli_runner.go:164] Run: docker network inspect no-preload-166874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:22:27.071264  560374 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1120 21:22:27.075818  560374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:22:27.086936  560374 kubeadm.go:884] updating cluster {Name:no-preload-166874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-166874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:22:27.087060  560374 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:22:27.087094  560374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:22:27.124749  560374 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:22:27.124780  560374 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:22:27.124790  560374 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1120 21:22:27.124942  560374 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-166874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-166874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:22:27.125031  560374 ssh_runner.go:195] Run: crio config
	I1120 21:22:27.178037  560374 cni.go:84] Creating CNI manager for ""
	I1120 21:22:27.178061  560374 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:27.178092  560374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1120 21:22:27.178123  560374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-166874 NodeName:no-preload-166874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:22:27.178300  560374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-166874"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:22:27.178390  560374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:22:27.187038  560374 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:22:27.187116  560374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:22:27.194907  560374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1120 21:22:27.208942  560374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:22:27.222301  560374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
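
The kubeadm config above is rendered from the option set logged at kubeadm.go:190 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2213 bytes, per the scp line just above). A sketch of that render step with text/template; the opts struct and template are deliberately trimmed, hypothetical subsets of what minikube actually fills in:

package main

import (
	"os"
	"text/template"
)

// opts is a trimmed, hypothetical subset of the kubeadm.go:190 option set.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.94.2",
		BindPort:         8443,
		NodeName:         "no-preload-166874",
		PodSubnet:        "10.244.0.0/16",
	}); err != nil {
		panic(err)
	}
}
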
	I1120 21:22:27.236728  560374 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:22:27.240793  560374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:22:27.251036  560374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:27.337347  560374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:22:27.359665  560374 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874 for IP: 192.168.94.2
	I1120 21:22:27.359695  560374 certs.go:195] generating shared ca certs ...
	I1120 21:22:27.359719  560374 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:27.359863  560374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:22:27.359918  560374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:22:27.359933  560374 certs.go:257] generating profile certs ...
	I1120 21:22:27.360038  560374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/client.key
	I1120 21:22:27.360112  560374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/apiserver.key.84d742a0
	I1120 21:22:27.360167  560374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/proxy-client.key
	I1120 21:22:27.360309  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:22:27.360349  560374 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:22:27.360364  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:22:27.360400  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:22:27.360432  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:22:27.360466  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:22:27.360525  560374 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:22:27.361260  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:22:27.380780  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:22:27.400961  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:22:27.422588  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:22:27.448093  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:22:27.468696  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:22:27.488211  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:22:27.508029  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/no-preload-166874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1120 21:22:27.528719  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:22:27.548243  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:22:27.569287  560374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:22:27.592263  560374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:22:27.607798  560374 ssh_runner.go:195] Run: openssl version
	I1120 21:22:27.615491  560374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:22:27.625003  560374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:22:27.634096  560374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:22:27.638175  560374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:22:27.638268  560374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:22:27.683021  560374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:22:27.691277  560374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:27.698791  560374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:22:27.705983  560374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:27.709630  560374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:27.709674  560374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:22:27.745091  560374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:22:27.753670  560374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:22:27.761664  560374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:22:27.769575  560374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:22:27.773884  560374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:22:27.773940  560374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:22:27.808900  560374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:22:27.817066  560374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:22:27.821321  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:22:27.856771  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:22:27.902193  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:22:27.945381  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:22:27.996575  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:22:28.057254  560374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
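
The `openssl x509 -checkend 86400` runs above ask whether each control-plane cert expires within the next 24 hours, while the earlier `-hash` calls compute the subject hashes behind the /etc/ssl/certs symlink names (3ec20f2e.0, b5213941.0, 51391683.0). A sketch of the expiry half with Go's crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
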
	I1120 21:22:28.096526  560374 kubeadm.go:401] StartCluster: {Name:no-preload-166874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-166874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:28.096641  560374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:22:28.096727  560374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:22:28.133496  560374 cri.go:89] found id: "61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab"
	I1120 21:22:28.133526  560374 cri.go:89] found id: "fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87"
	I1120 21:22:28.133534  560374 cri.go:89] found id: "4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605"
	I1120 21:22:28.133539  560374 cri.go:89] found id: "e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0"
	I1120 21:22:28.133542  560374 cri.go:89] found id: ""
	I1120 21:22:28.133595  560374 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:22:28.146370  560374 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:28Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:22:28.146452  560374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:22:28.155591  560374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:22:28.155626  560374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:22:28.155683  560374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:22:28.163584  560374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:22:28.164929  560374 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-166874" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:28.165898  560374 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-166874" cluster setting kubeconfig missing "no-preload-166874" context setting]
	I1120 21:22:28.167235  560374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:28.169107  560374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:22:28.177549  560374 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1120 21:22:28.177584  560374 kubeadm.go:602] duration metric: took 21.950467ms to restartPrimaryControlPlane
	I1120 21:22:28.177595  560374 kubeadm.go:403] duration metric: took 81.085592ms to StartCluster
	I1120 21:22:28.177614  560374 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:28.177687  560374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:28.180332  560374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:22:28.180619  560374 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:22:28.180683  560374 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:22:28.180808  560374 addons.go:70] Setting storage-provisioner=true in profile "no-preload-166874"
	I1120 21:22:28.180828  560374 addons.go:239] Setting addon storage-provisioner=true in "no-preload-166874"
	W1120 21:22:28.180836  560374 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:22:28.180838  560374 addons.go:70] Setting dashboard=true in profile "no-preload-166874"
	I1120 21:22:28.180855  560374 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:28.180864  560374 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:22:28.180868  560374 addons.go:70] Setting default-storageclass=true in profile "no-preload-166874"
	I1120 21:22:28.181008  560374 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-166874"
	I1120 21:22:28.181320  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:28.180868  560374 addons.go:239] Setting addon dashboard=true in "no-preload-166874"
	W1120 21:22:28.181429  560374 addons.go:248] addon dashboard should already be in state true
	I1120 21:22:28.181462  560374 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:22:28.181472  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:28.181931  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:28.183441  560374 out.go:179] * Verifying Kubernetes components...
	I1120 21:22:28.184774  560374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:22:28.209167  560374 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:22:28.209233  560374 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:22:28.210761  560374 addons.go:239] Setting addon default-storageclass=true in "no-preload-166874"
	W1120 21:22:28.210792  560374 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:22:28.210826  560374 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:22:28.210874  560374 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:28.210905  560374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:22:28.210972  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:28.211259  560374 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:22:28.212022  560374 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1120 21:22:26.257310  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:28.258690  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	I1120 21:22:28.213098  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:22:28.213124  560374 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:22:28.213187  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:28.237976  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:28.242191  560374 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:28.242293  560374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:22:28.242381  560374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:22:28.248251  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:28.269797  560374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:22:28.337035  560374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:22:28.351444  560374 node_ready.go:35] waiting up to 6m0s for node "no-preload-166874" to be "Ready" ...
	I1120 21:22:28.363404  560374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:22:28.367700  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:22:28.367730  560374 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:22:28.384350  560374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:22:28.385045  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:22:28.385074  560374 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:22:28.401960  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:22:28.401989  560374 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:22:28.422928  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:22:28.422968  560374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:22:28.439859  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:22:28.439890  560374 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:22:28.456179  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:22:28.456232  560374 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:22:28.472473  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:22:28.472501  560374 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:22:28.487356  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:22:28.487392  560374 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:22:28.501902  560374 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:22:28.501941  560374 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:22:28.516615  560374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:22:29.764682  560374 node_ready.go:49] node "no-preload-166874" is "Ready"
	I1120 21:22:29.764720  560374 node_ready.go:38] duration metric: took 1.413246794s for node "no-preload-166874" to be "Ready" ...
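	
	The readiness gate above has a direct kubectl equivalent; a minimal sketch, assuming kubectl is pointed at the no-preload-166874 cluster:
	
		kubectl wait --for=condition=Ready node/no-preload-166874 --timeout=6m
	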
	I1120 21:22:29.764737  560374 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:22:29.764803  560374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:22:30.321652  560374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.958210861s)
	I1120 21:22:30.321724  560374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.937337079s)
	I1120 21:22:30.321781  560374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.805134092s)
	I1120 21:22:30.321837  560374 api_server.go:72] duration metric: took 2.141181923s to wait for apiserver process to appear ...
	I1120 21:22:30.321859  560374 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:22:30.321884  560374 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 21:22:30.323306  560374 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-166874 addons enable metrics-server
	
	I1120 21:22:30.327253  560374 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:22:30.327277  560374 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:22:30.331461  560374 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1120 21:22:30.761109  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	W1120 21:22:33.257457  555241 pod_ready.go:104] pod "coredns-5dd5756b68-5t2cr" is not "Ready", error: <nil>
	I1120 21:22:30.332721  560374 addons.go:515] duration metric: took 2.152039718s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:22:30.822880  560374 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 21:22:30.829135  560374 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:22:30.829174  560374 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:22:31.322380  560374 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1120 21:22:31.326733  560374 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1120 21:22:31.327982  560374 api_server.go:141] control plane version: v1.34.1
	I1120 21:22:31.328014  560374 api_server.go:131] duration metric: took 1.006147371s to wait for apiserver health ...
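	
	The healthz probe retried above can also be issued by hand; a minimal sketch, assuming the same apiserver address and the default RBAC grant that exposes the health endpoints to unauthenticated clients. The ?verbose query returns the per-check [+]/[-] lines seen in the log:
	
		curl -k https://192.168.94.2:8443/healthz?verbose
	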
	I1120 21:22:31.328025  560374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:22:31.331842  560374 system_pods.go:59] 8 kube-system pods found
	I1120 21:22:31.331889  560374 system_pods.go:61] "coredns-66bc5c9577-knwbq" [5c4bc14b-7cfc-45bf-9b6f-521c533cfe32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:31.331900  560374 system_pods.go:61] "etcd-no-preload-166874" [a02c330e-1cd6-4f61-87db-027e90605904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:22:31.331908  560374 system_pods.go:61] "kindnet-w6hk4" [89c8ea21-6ae6-4fcc-b291-b1c32f999b92] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:22:31.331917  560374 system_pods.go:61] "kube-apiserver-no-preload-166874" [2bb777af-aeb4-4cac-a284-796384c40e7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:22:31.331926  560374 system_pods.go:61] "kube-controller-manager-no-preload-166874" [9738f3e6-c997-48d6-9095-bdc9fb156a0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:22:31.331938  560374 system_pods.go:61] "kube-proxy-8mtnk" [3b9f397e-01ab-4831-819f-0df8db892b7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:22:31.331945  560374 system_pods.go:61] "kube-scheduler-no-preload-166874" [e26a430d-fda0-4f24-beb7-21f373717d3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:22:31.331951  560374 system_pods.go:61] "storage-provisioner" [d362d663-578d-46ae-9fe2-b28ab1b00f5c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:31.331961  560374 system_pods.go:74] duration metric: took 3.929342ms to wait for pod list to return data ...
	I1120 21:22:31.331971  560374 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:22:31.334780  560374 default_sa.go:45] found service account: "default"
	I1120 21:22:31.334802  560374 default_sa.go:55] duration metric: took 2.824844ms for default service account to be created ...
	I1120 21:22:31.334810  560374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:22:31.337557  560374 system_pods.go:86] 8 kube-system pods found
	I1120 21:22:31.337583  560374 system_pods.go:89] "coredns-66bc5c9577-knwbq" [5c4bc14b-7cfc-45bf-9b6f-521c533cfe32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:22:31.337591  560374 system_pods.go:89] "etcd-no-preload-166874" [a02c330e-1cd6-4f61-87db-027e90605904] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:22:31.337603  560374 system_pods.go:89] "kindnet-w6hk4" [89c8ea21-6ae6-4fcc-b291-b1c32f999b92] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:22:31.337613  560374 system_pods.go:89] "kube-apiserver-no-preload-166874" [2bb777af-aeb4-4cac-a284-796384c40e7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:22:31.337632  560374 system_pods.go:89] "kube-controller-manager-no-preload-166874" [9738f3e6-c997-48d6-9095-bdc9fb156a0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:22:31.337641  560374 system_pods.go:89] "kube-proxy-8mtnk" [3b9f397e-01ab-4831-819f-0df8db892b7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:22:31.337646  560374 system_pods.go:89] "kube-scheduler-no-preload-166874" [e26a430d-fda0-4f24-beb7-21f373717d3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:22:31.337654  560374 system_pods.go:89] "storage-provisioner" [d362d663-578d-46ae-9fe2-b28ab1b00f5c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:22:31.337661  560374 system_pods.go:126] duration metric: took 2.845654ms to wait for k8s-apps to be running ...
	I1120 21:22:31.337670  560374 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:22:31.337718  560374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:31.351285  560374 system_svc.go:56] duration metric: took 13.606012ms WaitForService to wait for kubelet
	I1120 21:22:31.351312  560374 kubeadm.go:587] duration metric: took 3.170659431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:31.351336  560374 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:22:31.354183  560374 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:22:31.354207  560374 node_conditions.go:123] node cpu capacity is 8
	I1120 21:22:31.354253  560374 node_conditions.go:105] duration metric: took 2.910821ms to run NodePressure ...
	I1120 21:22:31.354268  560374 start.go:242] waiting for startup goroutines ...
	I1120 21:22:31.354278  560374 start.go:247] waiting for cluster config update ...
	I1120 21:22:31.354292  560374 start.go:256] writing updated cluster config ...
	I1120 21:22:31.354535  560374 ssh_runner.go:195] Run: rm -f paused
	I1120 21:22:31.358482  560374 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:22:31.361619  560374 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-knwbq" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:22:33.366842  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:24 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:24.041388816Z" level=info msg="Starting container: 3c0b4ad5b363165faac26ae4d0ea5a72ba582b2bbfe39847a2708fd303933fcb" id=809c89f2-c7e3-4553-8fae-dd79595ef064 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:24 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:24.043922283Z" level=info msg="Started container" PID=1853 containerID=3c0b4ad5b363165faac26ae4d0ea5a72ba582b2bbfe39847a2708fd303933fcb description=kube-system/coredns-66bc5c9577-zkl9z/coredns id=809c89f2-c7e3-4553-8fae-dd79595ef064 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4fbd3032d32e34ff78543f58f512405fb212b4dde179f87066e79abc9c07a3e2
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.263942946Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ebeb231a-9e0f-4fbe-9df3-aae8021229fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.264032213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.269291122Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:21b222895210bb1b44f96140ad220f9887f084f1f0f381954da45229aec0f9fd UID:02867973-6a01-4a1a-bd7e-194be3d350a6 NetNS:/var/run/netns/8d4dd9d1-6b58-4484-8929-4a5800a301d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aaf0}] Aliases:map[]}"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.269318504Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.283098474Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:21b222895210bb1b44f96140ad220f9887f084f1f0f381954da45229aec0f9fd UID:02867973-6a01-4a1a-bd7e-194be3d350a6 NetNS:/var/run/netns/8d4dd9d1-6b58-4484-8929-4a5800a301d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aaf0}] Aliases:map[]}"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.283306335Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.284492947Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.285730255Z" level=info msg="Ran pod sandbox 21b222895210bb1b44f96140ad220f9887f084f1f0f381954da45229aec0f9fd with infra container: default/busybox/POD" id=ebeb231a-9e0f-4fbe-9df3-aae8021229fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.287304016Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b20ded3-1a43-475d-aec4-f154a694c2d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.287462248Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7b20ded3-1a43-475d-aec4-f154a694c2d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.287515915Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7b20ded3-1a43-475d-aec4-f154a694c2d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.288393118Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9ca3909-cabe-42c4-9e0c-5180b055c282 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:22:27 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:27.290104876Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.395197258Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a9ca3909-cabe-42c4-9e0c-5180b055c282 name=/runtime.v1.ImageService/PullImage
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.396102724Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ec721b4c-93c4-477c-8cb4-0909d1f80a3b name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.3975576Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d241f080-7ec8-4d90-9d74-49b5770e8c4a name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.401007263Z" level=info msg="Creating container: default/busybox/busybox" id=be005f40-4570-4d80-b575-7e537ab648c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.401129255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.404784122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.405204392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.444383589Z" level=info msg="Created container 1c9687224736bcff3cf4f35623d0d44cbb274cc8897b941fa868f8b2d984591b: default/busybox/busybox" id=be005f40-4570-4d80-b575-7e537ab648c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.445147543Z" level=info msg="Starting container: 1c9687224736bcff3cf4f35623d0d44cbb274cc8897b941fa868f8b2d984591b" id=6e2ed2a2-8dc7-4c93-a47a-55ea73a77bbe name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:29 default-k8s-diff-port-454524 crio[777]: time="2025-11-20T21:22:29.446899866Z" level=info msg="Started container" PID=1925 containerID=1c9687224736bcff3cf4f35623d0d44cbb274cc8897b941fa868f8b2d984591b description=default/busybox/busybox id=6e2ed2a2-8dc7-4c93-a47a-55ea73a77bbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=21b222895210bb1b44f96140ad220f9887f084f1f0f381954da45229aec0f9fd
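	
	The pull-and-start sequence logged above can be replayed against the node's runtime with crictl; a minimal sketch, assuming the default-k8s-diff-port-454524 profile is still running:
	
		minikube -p default-k8s-diff-port-454524 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
		minikube -p default-k8s-diff-port-454524 ssh -- sudo crictl images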
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	1c9687224736b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   21b222895210b       busybox                                                default
	3c0b4ad5b3631       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   4fbd3032d32e3       coredns-66bc5c9577-zkl9z                               kube-system
	250f675f25216       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   bf9d7482f053a       storage-provisioner                                    kube-system
	462fe705e1e38       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   49eb0012c117a       kindnet-clzlq                                          kube-system
	2e5bba96183aa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   af9123c9cd08c       kube-proxy-fpnmp                                       kube-system
	69b5fe722e746       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   f1456bf11a73e       kube-scheduler-default-k8s-diff-port-454524            kube-system
	c56e0eba7966a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   0603998c24b4a       kube-apiserver-default-k8s-diff-port-454524            kube-system
	1540822150fb8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   8cb91be50185f       etcd-default-k8s-diff-port-454524                      kube-system
	75c04f4ff9025       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   3724bab7fbe4a       kube-controller-manager-default-k8s-diff-port-454524   kube-system
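	
	A table like the one above can be regenerated at any time from the node's container runtime; a minimal sketch:
	
		minikube -p default-k8s-diff-port-454524 ssh -- sudo crictl ps -a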
	
	
	==> coredns [3c0b4ad5b363165faac26ae4d0ea5a72ba582b2bbfe39847a2708fd303933fcb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58924 - 30895 "HINFO IN 8058395818195301055.5291016048668370935. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060161171s
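	
	The same CoreDNS output can be pulled through the API server; a minimal sketch, relying on the standard k8s-app=kube-dns label carried by the coredns pods:
	
		kubectl -n kube-system logs -l k8s-app=kube-dns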
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-454524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-454524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-454524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-454524
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:22:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:22:23 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:22:23 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:22:23 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:22:23 +0000   Thu, 20 Nov 2025 21:22:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-454524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                5a173afd-5240-460c-a507-61495be2fab4
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-zkl9z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-454524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-clzlq                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-454524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-454524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-fpnmp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-454524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-454524 event: Registered Node default-k8s-diff-port-454524 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-454524 status is now: NodeReady
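	
	This section is ordinary node-description output and can be refreshed against the live cluster:
	
		kubectl describe node default-k8s-diff-port-454524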
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [1540822150fb8bb488cbf4d0e08e7a88c98711501525c42fb5f8c9c35262bc8e] <==
	{"level":"warn","ts":"2025-11-20T21:22:03.507105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.515195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.534876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.546897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.555742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.563559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.571469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.578599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.587108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.595616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.602995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.611431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.618937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.625937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.634710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.642546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.651057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.661540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.669802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.685625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.688956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.708649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.715514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.723615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:03.785768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:22:36 up  4:04,  0 user,  load average: 5.08, 4.82, 2.92
	Linux default-k8s-diff-port-454524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [462fe705e1e387aa21a44678095cd59a6df1e91721d44a23449d17e5526c574d] <==
	I1120 21:22:13.036630       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:22:13.036912       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1120 21:22:13.037104       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:13.037124       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:13.037153       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:13.289952       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:13.290009       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:13.290022       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:13.290181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:22:13.631543       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:13.631575       1 metrics.go:72] Registering metrics
	I1120 21:22:13.631703       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:23.290353       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:22:23.290410       1 main.go:301] handling current node
	I1120 21:22:33.293102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:22:33.293153       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c56e0eba7966a864ea184b35ac8bb3cb9812e702df560a1a2040b361bb86ad41] <==
	I1120 21:22:04.437110       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:04.437207       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1120 21:22:04.443801       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1120 21:22:04.449569       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:04.449654       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:22:04.449681       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:22:04.542716       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:05.233876       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:22:05.239564       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:22:05.239584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:05.814425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:05.852768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:05.941777       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:22:05.948977       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1120 21:22:05.950401       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:22:05.956579       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:22:06.268148       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:22:06.840715       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:22:06.849806       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:22:06.858572       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:22:11.868912       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:22:12.073753       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:12.078510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:12.319471       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1120 21:22:35.073783       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:56972: use of closed network connection
	
	
	==> kube-controller-manager [75c04f4ff9025b3bd54e61ec1331ae16fa9c2b7c10040747f24fd56445667497] <==
	I1120 21:22:11.266067       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:22:11.266087       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:22:11.266108       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:22:11.266147       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:22:11.266189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:22:11.266206       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:22:11.266204       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:22:11.266246       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:22:11.266207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:22:11.266207       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:22:11.266520       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:22:11.266810       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:22:11.267146       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:22:11.269756       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:22:11.270810       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:11.270814       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:22:11.270907       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:22:11.270939       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:22:11.270944       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:22:11.270949       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:22:11.271971       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:11.278762       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:22:11.279258       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-454524" podCIDRs=["10.244.0.0/24"]
	I1120 21:22:11.283095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:22:26.217144       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2e5bba96183aafc47b46d3c2c3a22b7b1858c821a70de5d874efb61ee0275f2b] <==
	I1120 21:22:12.909964       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:22:12.973146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:22:13.073363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:22:13.073410       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:22:13.073537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:22:13.094326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:13.094385       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:22:13.100806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:22:13.101265       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:22:13.101307       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:13.102995       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:22:13.103098       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:22:13.103132       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:22:13.103137       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:22:13.103036       1 config.go:200] "Starting service config controller"
	I1120 21:22:13.103150       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:22:13.103331       1 config.go:309] "Starting node config controller"
	I1120 21:22:13.103353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:22:13.204245       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:22:13.204277       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:22:13.204258       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:22:13.204257       1 shared_informer.go:356] "Caches are synced" controller="service config"
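	
	The configuration warning above names a real knob: nodePortAddresses in the kube-proxy configuration. A minimal sketch of the suggested change, assuming the config is managed through the usual kube-proxy ConfigMap:
	
		kubectl -n kube-system edit configmap kube-proxy
		# in the embedded KubeProxyConfiguration, set:
		#   nodePortAddresses: ["primary"]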
	
	
	==> kube-scheduler [69b5fe722e7464d27e28befd9bf720019423c14a5364f8e44559d0d069f40393] <==
	E1120 21:22:04.350211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:22:04.350285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:22:04.350400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:22:04.350463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:22:04.352062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:22:04.352151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:22:04.352265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:22:04.352263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:22:04.352263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:22:04.352352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:22:04.352374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:22:04.352543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:22:04.352771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:22:04.352906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:22:05.162539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:22:05.194637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:22:05.194672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:22:05.284089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:22:05.454925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:22:05.466193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:22:05.467118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:22:05.525819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:22:05.602588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:22:05.737868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1120 21:22:08.239207       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.341977    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912320    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22ef496e-f864-423f-9af3-54490ba5e8fc-kube-proxy\") pod \"kube-proxy-fpnmp\" (UID: \"22ef496e-f864-423f-9af3-54490ba5e8fc\") " pod="kube-system/kube-proxy-fpnmp"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912378    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdc96c97-76df-4b3e-ac9a-4bda9a760322-lib-modules\") pod \"kindnet-clzlq\" (UID: \"bdc96c97-76df-4b3e-ac9a-4bda9a760322\") " pod="kube-system/kindnet-clzlq"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912404    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdc96c97-76df-4b3e-ac9a-4bda9a760322-xtables-lock\") pod \"kindnet-clzlq\" (UID: \"bdc96c97-76df-4b3e-ac9a-4bda9a760322\") " pod="kube-system/kindnet-clzlq"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912431    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b746c\" (UniqueName: \"kubernetes.io/projected/bdc96c97-76df-4b3e-ac9a-4bda9a760322-kube-api-access-b746c\") pod \"kindnet-clzlq\" (UID: \"bdc96c97-76df-4b3e-ac9a-4bda9a760322\") " pod="kube-system/kindnet-clzlq"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912467    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22ef496e-f864-423f-9af3-54490ba5e8fc-xtables-lock\") pod \"kube-proxy-fpnmp\" (UID: \"22ef496e-f864-423f-9af3-54490ba5e8fc\") " pod="kube-system/kube-proxy-fpnmp"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912490    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7c5\" (UniqueName: \"kubernetes.io/projected/22ef496e-f864-423f-9af3-54490ba5e8fc-kube-api-access-cn7c5\") pod \"kube-proxy-fpnmp\" (UID: \"22ef496e-f864-423f-9af3-54490ba5e8fc\") " pod="kube-system/kube-proxy-fpnmp"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912515    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bdc96c97-76df-4b3e-ac9a-4bda9a760322-cni-cfg\") pod \"kindnet-clzlq\" (UID: \"bdc96c97-76df-4b3e-ac9a-4bda9a760322\") " pod="kube-system/kindnet-clzlq"
	Nov 20 21:22:11 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:11.912540    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22ef496e-f864-423f-9af3-54490ba5e8fc-lib-modules\") pod \"kube-proxy-fpnmp\" (UID: \"22ef496e-f864-423f-9af3-54490ba5e8fc\") " pod="kube-system/kube-proxy-fpnmp"
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.020569    1324 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.020603    1324 projected.go:196] Error preparing data for projected volume kube-api-access-b746c for pod kube-system/kindnet-clzlq: configmap "kube-root-ca.crt" not found
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.020690    1324 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdc96c97-76df-4b3e-ac9a-4bda9a760322-kube-api-access-b746c podName:bdc96c97-76df-4b3e-ac9a-4bda9a760322 nodeName:}" failed. No retries permitted until 2025-11-20 21:22:12.520659015 +0000 UTC m=+5.927533326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b746c" (UniqueName: "kubernetes.io/projected/bdc96c97-76df-4b3e-ac9a-4bda9a760322-kube-api-access-b746c") pod "kindnet-clzlq" (UID: "bdc96c97-76df-4b3e-ac9a-4bda9a760322") : configmap "kube-root-ca.crt" not found
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.020902    1324 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.020938    1324 projected.go:196] Error preparing data for projected volume kube-api-access-cn7c5 for pod kube-system/kube-proxy-fpnmp: configmap "kube-root-ca.crt" not found
	Nov 20 21:22:12 default-k8s-diff-port-454524 kubelet[1324]: E1120 21:22:12.021012    1324 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22ef496e-f864-423f-9af3-54490ba5e8fc-kube-api-access-cn7c5 podName:22ef496e-f864-423f-9af3-54490ba5e8fc nodeName:}" failed. No retries permitted until 2025-11-20 21:22:12.520988119 +0000 UTC m=+5.927862423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cn7c5" (UniqueName: "kubernetes.io/projected/22ef496e-f864-423f-9af3-54490ba5e8fc-kube-api-access-cn7c5") pod "kube-proxy-fpnmp" (UID: "22ef496e-f864-423f-9af3-54490ba5e8fc") : configmap "kube-root-ca.crt" not found
	Nov 20 21:22:13 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:13.766313    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-clzlq" podStartSLOduration=2.7662853480000003 podStartE2EDuration="2.766285348s" podCreationTimestamp="2025-11-20 21:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:13.766076495 +0000 UTC m=+7.172950805" watchObservedRunningTime="2025-11-20 21:22:13.766285348 +0000 UTC m=+7.173159639"
	Nov 20 21:22:18 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:18.666804    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fpnmp" podStartSLOduration=7.666770604 podStartE2EDuration="7.666770604s" podCreationTimestamp="2025-11-20 21:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:13.785636985 +0000 UTC m=+7.192511298" watchObservedRunningTime="2025-11-20 21:22:18.666770604 +0000 UTC m=+12.073644916"
	Nov 20 21:22:23 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:23.636358    1324 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 20 21:22:23 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:23.696873    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bc9ffafb-037f-41fa-b27e-75d8ee4aff49-tmp\") pod \"storage-provisioner\" (UID: \"bc9ffafb-037f-41fa-b27e-75d8ee4aff49\") " pod="kube-system/storage-provisioner"
	Nov 20 21:22:23 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:23.696909    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2vt\" (UniqueName: \"kubernetes.io/projected/bc9ffafb-037f-41fa-b27e-75d8ee4aff49-kube-api-access-jg2vt\") pod \"storage-provisioner\" (UID: \"bc9ffafb-037f-41fa-b27e-75d8ee4aff49\") " pod="kube-system/storage-provisioner"
	Nov 20 21:22:23 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:23.696930    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9d943a5-d29a-402e-ad52-29d36ed22d01-config-volume\") pod \"coredns-66bc5c9577-zkl9z\" (UID: \"f9d943a5-d29a-402e-ad52-29d36ed22d01\") " pod="kube-system/coredns-66bc5c9577-zkl9z"
	Nov 20 21:22:23 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:23.696958    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbpm\" (UniqueName: \"kubernetes.io/projected/f9d943a5-d29a-402e-ad52-29d36ed22d01-kube-api-access-mxbpm\") pod \"coredns-66bc5c9577-zkl9z\" (UID: \"f9d943a5-d29a-402e-ad52-29d36ed22d01\") " pod="kube-system/coredns-66bc5c9577-zkl9z"
	Nov 20 21:22:24 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:24.806647    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.806623077 podStartE2EDuration="12.806623077s" podCreationTimestamp="2025-11-20 21:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:24.795401612 +0000 UTC m=+18.202275923" watchObservedRunningTime="2025-11-20 21:22:24.806623077 +0000 UTC m=+18.213497390"
	Nov 20 21:22:26 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:26.956870    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zkl9z" podStartSLOduration=14.95684124 podStartE2EDuration="14.95684124s" podCreationTimestamp="2025-11-20 21:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:22:24.80682099 +0000 UTC m=+18.213695283" watchObservedRunningTime="2025-11-20 21:22:26.95684124 +0000 UTC m=+20.363715551"
	Nov 20 21:22:27 default-k8s-diff-port-454524 kubelet[1324]: I1120 21:22:27.014403    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lpq\" (UniqueName: \"kubernetes.io/projected/02867973-6a01-4a1a-bd7e-194be3d350a6-kube-api-access-v7lpq\") pod \"busybox\" (UID: \"02867973-6a01-4a1a-bd7e-194be3d350a6\") " pod="default/busybox"
	
	
	==> storage-provisioner [250f675f25216224e2108d72e55df30f010ea1c5effa4b8053e1410cb0281255] <==
	I1120 21:22:24.045415       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:22:24.060916       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:22:24.060981       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1120 21:22:24.064042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:24.071184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:22:24.071472       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:22:24.071685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-454524_2ea0ff27-97d0-417e-b2b2-1cf921a7db12!
	I1120 21:22:24.072190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2a61f81-02bd-48af-aa0d-304db8e963b9", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-454524_2ea0ff27-97d0-417e-b2b2-1cf921a7db12 became leader
	W1120 21:22:24.077802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:24.082006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1120 21:22:24.171911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-454524_2ea0ff27-97d0-417e-b2b2-1cf921a7db12!
	W1120 21:22:26.085981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:26.090620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:28.095011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:28.101744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:30.105188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:30.109899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:32.113427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:32.117483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:34.121030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:34.124895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:36.128738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:36.133551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.35s)
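Aside: the storage-provisioner log captured above appears to track its leader-election lease through the legacy core/v1 Endpoints API, which is why each renewal is paired with "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". A minimal client-go sketch of the suggested EndpointSlice lookup follows; the kube-system namespace matches the lease in the log, but everything else (in-cluster config, plain listing) is illustrative, not the provisioner's actual code.

	// Sketch: list discovery.k8s.io/v1 EndpointSlices instead of core v1
	// Endpoints. Assumes the binary runs in-cluster with RBAC permission
	// to list endpointslices in kube-system.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}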

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-936214 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-936214 --alsologtostderr -v=1: exit status 80 (2.585231222s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-936214 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:22:55.097070  568091 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:22:55.097171  568091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:55.097183  568091 out.go:374] Setting ErrFile to fd 2...
	I1120 21:22:55.097190  568091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:55.097500  568091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:22:55.097829  568091 out.go:368] Setting JSON to false
	I1120 21:22:55.097898  568091 mustload.go:66] Loading cluster: old-k8s-version-936214
	I1120 21:22:55.098400  568091 config.go:182] Loaded profile config "old-k8s-version-936214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1120 21:22:55.098869  568091 cli_runner.go:164] Run: docker container inspect old-k8s-version-936214 --format={{.State.Status}}
	I1120 21:22:55.125176  568091 host.go:66] Checking if "old-k8s-version-936214" exists ...
	I1120 21:22:55.125593  568091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:55.194125  568091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:89 SystemTime:2025-11-20 21:22:55.183564753 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:55.194974  568091 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-936214 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:22:55.196986  568091 out.go:179] * Pausing node old-k8s-version-936214 ... 
	I1120 21:22:55.198258  568091 host.go:66] Checking if "old-k8s-version-936214" exists ...
	I1120 21:22:55.198646  568091 ssh_runner.go:195] Run: systemctl --version
	I1120 21:22:55.198699  568091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-936214
	I1120 21:22:55.223700  568091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/old-k8s-version-936214/id_rsa Username:docker}
	I1120 21:22:55.326932  568091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:55.340642  568091 pause.go:52] kubelet running: true
	I1120 21:22:55.340718  568091 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:22:55.515071  568091 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:22:55.515165  568091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:22:55.593702  568091 cri.go:89] found id: "57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4"
	I1120 21:22:55.593727  568091 cri.go:89] found id: "9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd"
	I1120 21:22:55.593731  568091 cri.go:89] found id: "61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d"
	I1120 21:22:55.593734  568091 cri.go:89] found id: "41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64"
	I1120 21:22:55.593737  568091 cri.go:89] found id: "f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385"
	I1120 21:22:55.593740  568091 cri.go:89] found id: "0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5"
	I1120 21:22:55.593742  568091 cri.go:89] found id: "4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997"
	I1120 21:22:55.593745  568091 cri.go:89] found id: "1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d"
	I1120 21:22:55.593747  568091 cri.go:89] found id: "7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392"
	I1120 21:22:55.593753  568091 cri.go:89] found id: "5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	I1120 21:22:55.593756  568091 cri.go:89] found id: "5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e"
	I1120 21:22:55.593758  568091 cri.go:89] found id: ""
	I1120 21:22:55.593807  568091 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:22:55.607241  568091 retry.go:31] will retry after 151.989156ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:55Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:22:55.759683  568091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:55.773081  568091 pause.go:52] kubelet running: false
	I1120 21:22:55.773159  568091 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:22:55.921415  568091 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:22:55.921505  568091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:22:55.997278  568091 cri.go:89] found id: "57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4"
	I1120 21:22:55.997303  568091 cri.go:89] found id: "9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd"
	I1120 21:22:55.997307  568091 cri.go:89] found id: "61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d"
	I1120 21:22:55.997310  568091 cri.go:89] found id: "41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64"
	I1120 21:22:55.997313  568091 cri.go:89] found id: "f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385"
	I1120 21:22:55.997316  568091 cri.go:89] found id: "0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5"
	I1120 21:22:55.997318  568091 cri.go:89] found id: "4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997"
	I1120 21:22:55.997321  568091 cri.go:89] found id: "1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d"
	I1120 21:22:55.997323  568091 cri.go:89] found id: "7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392"
	I1120 21:22:55.997329  568091 cri.go:89] found id: "5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	I1120 21:22:55.997332  568091 cri.go:89] found id: "5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e"
	I1120 21:22:55.997334  568091 cri.go:89] found id: ""
	I1120 21:22:55.997376  568091 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:22:56.010020  568091 retry.go:31] will retry after 302.020308ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:56Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:22:56.312463  568091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:56.329606  568091 pause.go:52] kubelet running: false
	I1120 21:22:56.329674  568091 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:22:56.549002  568091 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:22:56.549105  568091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:22:56.636511  568091 cri.go:89] found id: "57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4"
	I1120 21:22:56.636542  568091 cri.go:89] found id: "9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd"
	I1120 21:22:56.636548  568091 cri.go:89] found id: "61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d"
	I1120 21:22:56.636568  568091 cri.go:89] found id: "41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64"
	I1120 21:22:56.636573  568091 cri.go:89] found id: "f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385"
	I1120 21:22:56.636579  568091 cri.go:89] found id: "0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5"
	I1120 21:22:56.636583  568091 cri.go:89] found id: "4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997"
	I1120 21:22:56.636587  568091 cri.go:89] found id: "1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d"
	I1120 21:22:56.636592  568091 cri.go:89] found id: "7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392"
	I1120 21:22:56.636600  568091 cri.go:89] found id: "5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	I1120 21:22:56.636605  568091 cri.go:89] found id: "5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e"
	I1120 21:22:56.636609  568091 cri.go:89] found id: ""
	I1120 21:22:56.636655  568091 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:22:56.651521  568091 retry.go:31] will retry after 602.328085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:56Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:22:57.254336  568091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:22:57.271701  568091 pause.go:52] kubelet running: false
	I1120 21:22:57.271762  568091 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:22:57.484709  568091 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:22:57.484815  568091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:22:57.568828  568091 cri.go:89] found id: "57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4"
	I1120 21:22:57.568854  568091 cri.go:89] found id: "9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd"
	I1120 21:22:57.568860  568091 cri.go:89] found id: "61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d"
	I1120 21:22:57.568865  568091 cri.go:89] found id: "41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64"
	I1120 21:22:57.568869  568091 cri.go:89] found id: "f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385"
	I1120 21:22:57.568873  568091 cri.go:89] found id: "0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5"
	I1120 21:22:57.568877  568091 cri.go:89] found id: "4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997"
	I1120 21:22:57.568881  568091 cri.go:89] found id: "1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d"
	I1120 21:22:57.568885  568091 cri.go:89] found id: "7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392"
	I1120 21:22:57.568899  568091 cri.go:89] found id: "5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	I1120 21:22:57.568903  568091 cri.go:89] found id: "5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e"
	I1120 21:22:57.568907  568091 cri.go:89] found id: ""
	I1120 21:22:57.568953  568091 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:22:57.591376  568091 out.go:203] 
	W1120 21:22:57.592863  568091 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:22:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:22:57.592883  568091 out.go:285] * 
	W1120 21:22:57.601209  568091 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:22:57.602977  568091 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-936214 --alsologtostderr -v=1 failed: exit status 80
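Every pause attempt above fails at the same step: after crictl enumerates the kube-system containers, the pause path runs `sudo runc list -f json`, gets `open /run/runc: no such file or directory`, and after several backoff retries exits with GUEST_PAUSE. The sketch below reproduces that probe with one possible fallback; `listRunningJSON` is a hypothetical helper, not minikube's implementation, and treating the missing state directory as an empty list assumes cri-o keeps its runc state under a different root on this image.

	// Sketch: re-run the failing probe from the log and fall back to an
	// empty container list when runc's default state root (/run/runc) is
	// absent. Hypothetical helper, not minikube's pause code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listRunningJSON() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Assumption: /run/runc never exists here because cri-o keeps
			// runtime state elsewhere; treat that as "nothing running".
			if strings.Contains(string(out), "no such file or directory") {
				return "[]", nil // stand-in for an empty list
			}
			return "", fmt.Errorf("runc list: %w: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		out, err := listRunningJSON()
		if err != nil {
			fmt.Println("would exit with GUEST_PAUSE:", err)
			return
		}
		fmt.Println("running containers:", out)
	}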
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-936214
helpers_test.go:243: (dbg) docker inspect old-k8s-version-936214:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	        "Created": "2025-11-20T21:20:38.133542071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 555467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:21:54.852001535Z",
	            "FinishedAt": "2025-11-20T21:21:53.805654243Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hosts",
	        "LogPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d-json.log",
	        "Name": "/old-k8s-version-936214",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-936214:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-936214",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	                "LowerDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-936214",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-936214/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-936214",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b1608728ef63f67eed5efa67278b41a62bb0595402ffc5437470759fcf3c1d3",
	            "SandboxKey": "/var/run/docker/netns/6b1608728ef6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-936214": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5b009581e5fe97051a52995f889c213d44d34cc774e441d6eb45e5a9ea52ad6",
	                    "EndpointID": "71df94f9372be4011ff7c125dfaaefa22f04147a577b14a26216be8b7b31b9a8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:59:11:d3:3e:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-936214",
	                        "6dcf9965a656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214: exit status 2 (424.138541ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25: (1.829628619s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:22:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:22:54.292690  567536 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:22:54.292963  567536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:54.292973  567536 out.go:374] Setting ErrFile to fd 2...
	I1120 21:22:54.292977  567536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:54.293301  567536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:22:54.293817  567536 out.go:368] Setting JSON to false
	I1120 21:22:54.295553  567536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14716,"bootTime":1763659058,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:22:54.295673  567536 start.go:143] virtualization: kvm guest
	I1120 21:22:54.298271  567536 out.go:179] * [default-k8s-diff-port-454524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:22:54.299692  567536 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:22:54.299741  567536 notify.go:221] Checking for updates...
	I1120 21:22:54.302643  567536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:22:54.303909  567536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:54.305419  567536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:22:54.306876  567536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:22:54.308068  567536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:22:54.309925  567536 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:54.310721  567536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:22:54.341993  567536 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:22:54.342191  567536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:54.422884  567536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:22:54.409623622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:54.423037  567536 docker.go:319] overlay module found
	I1120 21:22:54.425654  567536 out.go:179] * Using the docker driver based on existing profile
	I1120 21:22:54.427097  567536 start.go:309] selected driver: docker
	I1120 21:22:54.427116  567536 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:54.427252  567536 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:22:54.427978  567536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:54.506261  567536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-20 21:22:54.492661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:54.506657  567536 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:54.506696  567536 cni.go:84] Creating CNI manager for ""
	I1120 21:22:54.506762  567536 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:54.506843  567536 start.go:353] cluster config:
	{Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:54.509868  567536 out.go:179] * Starting "default-k8s-diff-port-454524" primary control-plane node in "default-k8s-diff-port-454524" cluster
	I1120 21:22:54.511335  567536 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:22:54.512793  567536 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:22:54.514007  567536 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:22:54.514053  567536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:22:54.514065  567536 cache.go:65] Caching tarball of preloaded images
	I1120 21:22:54.514074  567536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:22:54.514199  567536 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:22:54.514226  567536 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:22:54.514367  567536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/config.json ...
	I1120 21:22:54.542704  567536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:22:54.542727  567536 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:22:54.542747  567536 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:22:54.542789  567536 start.go:360] acquireMachinesLock for default-k8s-diff-port-454524: {Name:mkc1f74cf93a6c8d3be3c8868fe49c35c90c52de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:54.542850  567536 start.go:364] duration metric: took 40.745µs to acquireMachinesLock for "default-k8s-diff-port-454524"
	I1120 21:22:54.542869  567536 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:22:54.542874  567536 fix.go:54] fixHost starting: 
	I1120 21:22:54.543172  567536 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:54.568391  567536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-454524: state=Stopped err=<nil>
	W1120 21:22:54.568436  567536 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 21:22:51.367987  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	W1120 21:22:53.368857  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.410283928Z" level=info msg="Created container ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=68d42e86-f91d-4b48-9266-6a35c444e6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.410887372Z" level=info msg="Starting container: ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a" id=d0fda6de-3e66-4445-989b-20814577c7db name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.413010754Z" level=info msg="Started container" PID=1728 containerID=ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=d0fda6de-3e66-4445-989b-20814577c7db name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.723089069Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7bbd3d82-cf4f-42e9-a2aa-03810939bc91 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.725569842Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=addd40d9-4d5d-401f-aeb5-9f884320d6bf name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.728947048Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=fc5edcc1-693e-4597-b198-595fa4e92939 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.729072916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.737143893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.737873144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.764611661Z" level=info msg="Created container fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=fc5edcc1-693e-4597-b198-595fa4e92939 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.76528981Z" level=info msg="Starting container: fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852" id=8233a29f-caa9-4c9e-b3d2-c424dec2c2ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.767249302Z" level=info msg="Started container" PID=1739 containerID=fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=8233a29f-caa9-4c9e-b3d2-c424dec2c2ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:25 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:25.728700204Z" level=info msg="Removing container: ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a" id=bc8c6147-cc42-461f-b339-633be417853b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:25 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:25.7407216Z" level=info msg="Removed container ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=bc8c6147-cc42-461f-b339-633be417853b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.616014889Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e1415bd4-1ec9-41cb-8d0e-407aac05db5d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.618801954Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=80ce23fa-7af1-4a43-b0e7-1c45e97b2511 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.620048451Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=2f4eedff-8f8b-4593-ae3d-c4068b59346c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.620201201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.628609363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.629181449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.668788173Z" level=info msg="Created container 5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=2f4eedff-8f8b-4593-ae3d-c4068b59346c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.669696517Z" level=info msg="Starting container: 5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539" id=41fa1e40-5bab-479b-89b9-15f7d7005e72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.672111962Z" level=info msg="Started container" PID=1751 containerID=5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=41fa1e40-5bab-479b-89b9-15f7d7005e72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.772623479Z" level=info msg="Removing container: fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852" id=ea8309ba-441a-45e4-9a8d-b2eb6aa9b0b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.785914132Z" level=info msg="Removed container fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=ea8309ba-441a-45e4-9a8d-b2eb6aa9b0b6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5e27b9e0e5682       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   f8e8857132290       dashboard-metrics-scraper-5f989dc9cf-d857n       kubernetes-dashboard
	5ea10eba6b8ec       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   0e2866df9036a       kubernetes-dashboard-8694d4445c-54v92            kubernetes-dashboard
	57b54ff401c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   ea3ad66742306       storage-provisioner                              kube-system
	171a0489c7661       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   a04094c8ecd57       busybox                                          default
	9e2099ccb0977       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   cf577d1862961       coredns-5dd5756b68-5t2cr                         kube-system
	61081684175ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   a546205b3cc29       kindnet-949k6                                    kube-system
	41525c982cb92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   ea3ad66742306       storage-provisioner                              kube-system
	f373d89928607       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   8b05e13ff911e       kube-proxy-z9sk2                                 kube-system
	0c3ab02068c6a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   287d8808b4119       kube-scheduler-old-k8s-version-936214            kube-system
	4db07d3a1945b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   49735513eb0dc       kube-controller-manager-old-k8s-version-936214   kube-system
	1abb223a5577f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   00dd80194d030       kube-apiserver-old-k8s-version-936214            kube-system
	7132c523fba71       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   dd8427748a9ee       etcd-old-k8s-version-936214                      kube-system
	
	
	==> coredns [9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59592 - 53604 "HINFO IN 4353941279716189811.6347620940285862801. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023175055s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-936214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-936214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-936214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-936214
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-936214
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                6cfc11cb-7b0f-45ce-af89-7b901c8d9e72
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-5t2cr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-936214                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-949k6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-936214             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-936214    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-z9sk2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-936214             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d857n        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-54v92             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-936214 event: Registered Node old-k8s-version-936214 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-936214 status is now: NodeReady
	  Normal  Starting                 58s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-936214 event: Registered Node old-k8s-version-936214 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392] <==
	{"level":"info","ts":"2025-11-20T21:22:02.192619Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T21:22:02.192732Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-20T21:22:02.193083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-20T21:22:02.193181Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-20T21:22:02.193396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:22:02.193467Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:22:02.195903Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T21:22:02.196264Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T21:22:02.196332Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T21:22:02.196532Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-20T21:22:02.196759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-20T21:22:03.180253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.181745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:22:03.183071Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T21:22:03.181678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-936214 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T21:22:03.183643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:22:03.183696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T21:22:03.184128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T21:22:03.18562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 21:22:59 up  4:05,  0 user,  load average: 4.58, 4.72, 2.94
	Linux old-k8s-version-936214 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d] <==
	I1120 21:22:05.193062       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:22:05.193534       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:22:05.193748       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:05.193764       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:05.193799       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:05.482181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:05.482253       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:05.482267       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:05.482423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:22:05.982840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:05.982889       1 metrics.go:72] Registering metrics
	I1120 21:22:05.983016       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:15.483904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:15.483974       1 main.go:301] handling current node
	I1120 21:22:25.482823       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:25.482851       1 main.go:301] handling current node
	I1120 21:22:35.482712       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:35.482764       1 main.go:301] handling current node
	I1120 21:22:45.482276       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:45.482306       1 main.go:301] handling current node
	I1120 21:22:55.487905       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:55.487933       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d] <==
	I1120 21:22:04.536410       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 21:22:04.536779       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1120 21:22:04.536790       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:04.536878       1 aggregator.go:166] initial CRD sync complete...
	I1120 21:22:04.536886       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 21:22:04.536892       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:22:04.536899       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:22:04.545002       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:04.579808       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1120 21:22:04.632615       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 21:22:04.632705       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 21:22:04.632783       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 21:22:04.632838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1120 21:22:04.639683       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:22:05.437152       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:05.539783       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 21:22:05.578717       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 21:22:05.600307       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:05.611794       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:05.620400       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 21:22:05.663002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.81.161"}
	I1120 21:22:05.681528       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.60.181"}
	I1120 21:22:17.117727       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1120 21:22:17.520775       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 21:22:17.569100       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997] <==
	I1120 21:22:17.194493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.525832ms"
	I1120 21:22:17.195288       1 shared_informer.go:318] Caches are synced for disruption
	I1120 21:22:17.195305       1 shared_informer.go:318] Caches are synced for ephemeral
	I1120 21:22:17.203647       1 shared_informer.go:318] Caches are synced for attach detach
	I1120 21:22:17.206188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.641378ms"
	I1120 21:22:17.206351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.29µs"
	I1120 21:22:17.206533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="157.061µs"
	I1120 21:22:17.214091       1 shared_informer.go:318] Caches are synced for expand
	I1120 21:22:17.254728       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:22:17.264123       1 shared_informer.go:318] Caches are synced for PVC protection
	I1120 21:22:17.264231       1 shared_informer.go:318] Caches are synced for stateful set
	I1120 21:22:17.267528       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:22:17.528493       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1120 21:22:17.585701       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:22:17.601186       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:22:17.601245       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 21:22:22.740357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.381503ms"
	I1120 21:22:22.740461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.642µs"
	I1120 21:22:24.734801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.393µs"
	I1120 21:22:25.739658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.548µs"
	I1120 21:22:26.743958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.405µs"
	I1120 21:22:39.788842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.467µs"
	I1120 21:22:41.440899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.055738ms"
	I1120 21:22:41.441052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.874µs"
	I1120 21:22:47.498103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.846µs"
	
	
	==> kube-proxy [f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385] <==
	I1120 21:22:05.053620       1 server_others.go:69] "Using iptables proxy"
	I1120 21:22:05.066100       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1120 21:22:05.087981       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:05.090661       1 server_others.go:152] "Using iptables Proxier"
	I1120 21:22:05.090744       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 21:22:05.090766       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 21:22:05.090807       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 21:22:05.091050       1 server.go:846] "Version info" version="v1.28.0"
	I1120 21:22:05.091086       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:05.091861       1 config.go:315] "Starting node config controller"
	I1120 21:22:05.091930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 21:22:05.092037       1 config.go:188] "Starting service config controller"
	I1120 21:22:05.092281       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 21:22:05.092243       1 config.go:97] "Starting endpoint slice config controller"
	I1120 21:22:05.092389       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 21:22:05.192862       1 shared_informer.go:318] Caches are synced for node config
	I1120 21:22:05.192869       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 21:22:05.192891       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5] <==
	E1120 21:22:04.544752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 21:22:04.546780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 21:22:04.546956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 21:22:04.547684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1120 21:22:04.547711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 21:22:04.547717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1120 21:22:04.547074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 21:22:04.547783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 21:22:04.547293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:22:04.547815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1120 21:22:04.547392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1120 21:22:04.547824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 21:22:04.547841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1120 21:22:04.547844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 21:22:04.547747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 21:22:04.547907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 21:22:04.547926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1120 21:22:04.547943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 21:22:04.547913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:22:04.547957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 21:22:04.548067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:22:04.548185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:22:04.548232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:22:04.548273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1120 21:22:05.428312       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.184406     720 topology_manager.go:215] "Topology Admit Handler" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.184755     720 topology_manager.go:215] "Topology Admit Handler" podUID="172155fb-773b-4a5d-b9d0-9f9043bd4b72" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362445     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc318bd2-9775-4aca-8a07-66fb46a6a9e3-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d857n\" (UID: \"cc318bd2-9775-4aca-8a07-66fb46a6a9e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362516     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/172155fb-773b-4a5d-b9d0-9f9043bd4b72-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-54v92\" (UID: \"172155fb-773b-4a5d-b9d0-9f9043bd4b72\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362656     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpddq\" (UniqueName: \"kubernetes.io/projected/172155fb-773b-4a5d-b9d0-9f9043bd4b72-kube-api-access-bpddq\") pod \"kubernetes-dashboard-8694d4445c-54v92\" (UID: \"172155fb-773b-4a5d-b9d0-9f9043bd4b72\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362764     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9vv\" (UniqueName: \"kubernetes.io/projected/cc318bd2-9775-4aca-8a07-66fb46a6a9e3-kube-api-access-kq9vv\") pod \"dashboard-metrics-scraper-5f989dc9cf-d857n\" (UID: \"cc318bd2-9775-4aca-8a07-66fb46a6a9e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:22 old-k8s-version-936214 kubelet[720]: I1120 21:22:22.729957     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92" podStartSLOduration=1.1163290319999999 podCreationTimestamp="2025-11-20 21:22:17 +0000 UTC" firstStartedPulling="2025-11-20 21:22:17.510689113 +0000 UTC m=+16.027639860" lastFinishedPulling="2025-11-20 21:22:22.124212921 +0000 UTC m=+20.641163667" observedRunningTime="2025-11-20 21:22:22.729550669 +0000 UTC m=+21.246501423" watchObservedRunningTime="2025-11-20 21:22:22.729852839 +0000 UTC m=+21.246803600"
	Nov 20 21:22:24 old-k8s-version-936214 kubelet[720]: I1120 21:22:24.722537     720 scope.go:117] "RemoveContainer" containerID="ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: I1120 21:22:25.727292     720 scope.go:117] "RemoveContainer" containerID="ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: I1120 21:22:25.727540     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: E1120 21:22:25.728001     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:26 old-k8s-version-936214 kubelet[720]: I1120 21:22:26.731927     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:26 old-k8s-version-936214 kubelet[720]: E1120 21:22:26.732359     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:27 old-k8s-version-936214 kubelet[720]: I1120 21:22:27.733868     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:27 old-k8s-version-936214 kubelet[720]: E1120 21:22:27.734150     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.614310     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.769951     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.770256     720 scope.go:117] "RemoveContainer" containerID="5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: E1120 21:22:39.770563     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:47 old-k8s-version-936214 kubelet[720]: I1120 21:22:47.485851     720 scope.go:117] "RemoveContainer" containerID="5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	Nov 20 21:22:47 old-k8s-version-936214 kubelet[720]: E1120 21:22:47.486305     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: kubelet.service: Consumed 1.630s CPU time.
	
	
	==> kubernetes-dashboard [5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e] <==
	2025/11/20 21:22:22 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:22 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:22 Using secret token for csrf signing
	2025/11/20 21:22:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:22 Successful initial request to the apiserver, version: v1.28.0
	2025/11/20 21:22:22 Generating JWE encryption key
	2025/11/20 21:22:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:22 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:22 Creating in-cluster Sidecar client
	2025/11/20 21:22:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:22 Serving insecurely on HTTP port: 9090
	2025/11/20 21:22:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:22 Starting overwatch
	
	
	==> storage-provisioner [41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64] <==
	I1120 21:22:05.011945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:05.014026       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4] <==
	I1120 21:22:05.721849       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:22:05.731785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:22:05.731837       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 21:22:23.133435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:22:23.133666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a!
	I1120 21:22:23.133616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e2486bc-d5c5-4ff5-8f75-9bef5c9224fc", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a became leader
	I1120 21:22:23.234472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-936214 -n old-k8s-version-936214: exit status 2 (369.132281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-936214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
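For manual triage, the paused-state probe above can be re-run by hand: the harness drives "minikube status" through a Go template over the profile's status fields and only then inspects the output. A minimal sketch against the same profile, using the {{.Host}} and {{.APIServer}} fields the harness itself exercises ({{.Kubelet}} is an assumed additional field of the same status struct):

	# Sketch: re-run the harness's paused-state probe by hand.
	# A successfully paused profile would be expected to report "Paused"
	# for the apiserver; this run still reported "Running" (with exit
	# status 2, which the harness notes "may be ok").
	out/minikube-linux-amd64 status -p old-k8s-version-936214 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
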
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-936214
helpers_test.go:243: (dbg) docker inspect old-k8s-version-936214:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	        "Created": "2025-11-20T21:20:38.133542071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 555467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:21:54.852001535Z",
	            "FinishedAt": "2025-11-20T21:21:53.805654243Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/hosts",
	        "LogPath": "/var/lib/docker/containers/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d/6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d-json.log",
	        "Name": "/old-k8s-version-936214",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-936214:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-936214",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6dcf9965a65602a2cd3c58ce2692f5f2204edf3ee4525fd0b29701f1c468c40d",
	                "LowerDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61ff376db64514633fe10109a8eb527f4d36195018486d9e556fb50a910f0f33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-936214",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-936214/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-936214",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-936214",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b1608728ef63f67eed5efa67278b41a62bb0595402ffc5437470759fcf3c1d3",
	            "SandboxKey": "/var/run/docker/netns/6b1608728ef6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-936214": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5b009581e5fe97051a52995f889c213d44d34cc774e441d6eb45e5a9ea52ad6",
	                    "EndpointID": "71df94f9372be4011ff7c125dfaaefa22f04147a577b14a26216be8b7b31b9a8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:59:11:d3:3e:d5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-936214",
	                        "6dcf9965a656"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
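The "Ports" map in the inspect output above is how anything on the host (including the test harness) reaches services inside the kic container: each container port is forwarded from a 127.0.0.1 host port. A single mapping can be pulled straight from the same inspect data with a Go template; a sketch, with the container name and the expected value taken from this report:

	# Prints the 127.0.0.1 host port forwarded to the container's 8443/tcp
	# (the apiserver port; 33111 in this run, per the output above).
	docker inspect old-k8s-version-936214 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
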
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214: exit status 2 (366.09202ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-936214 logs -n 25: (1.236682245s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:22:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:22:54.292690  567536 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:22:54.292963  567536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:54.292973  567536 out.go:374] Setting ErrFile to fd 2...
	I1120 21:22:54.292977  567536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:22:54.293301  567536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:22:54.293817  567536 out.go:368] Setting JSON to false
	I1120 21:22:54.295553  567536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14716,"bootTime":1763659058,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:22:54.295673  567536 start.go:143] virtualization: kvm guest
	I1120 21:22:54.298271  567536 out.go:179] * [default-k8s-diff-port-454524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:22:54.299692  567536 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:22:54.299741  567536 notify.go:221] Checking for updates...
	I1120 21:22:54.302643  567536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:22:54.303909  567536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:22:54.305419  567536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:22:54.306876  567536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:22:54.308068  567536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:22:54.309925  567536 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:54.310721  567536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:22:54.341993  567536 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:22:54.342191  567536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:54.422884  567536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:22:54.409623622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:54.423037  567536 docker.go:319] overlay module found
	I1120 21:22:54.425654  567536 out.go:179] * Using the docker driver based on existing profile
	I1120 21:22:54.427097  567536 start.go:309] selected driver: docker
	I1120 21:22:54.427116  567536 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:22:54.427252  567536 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:22:54.427978  567536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:22:54.506261  567536 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-20 21:22:54.492661201 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:22:54.506657  567536 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:22:54.506696  567536 cni.go:84] Creating CNI manager for ""
	I1120 21:22:54.506762  567536 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:22:54.506843  567536 start.go:353] cluster config:
	{Name:default-k8s-diff-port-454524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-454524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
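The two dumps above are the same ClusterConfig minikube keeps in memory and persists per profile; the "Saving config to .../config.json" lines below write it out. As a rough illustration, the saved file can be read back like this (the struct is a hand-picked subset of fields visible in the dump, not minikube's actual type definitions):

```go
// Sketch: read a few ClusterConfig fields back out of a profile's
// config.json. Field names are taken from the dump above; the struct
// shape is an assumption, not minikube's real types.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type clusterConfig struct {
	Name             string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}
	Nodes []struct {
		IP   string
		Port int
	}
}

func main() {
	path := os.ExpandEnv("$HOME/.minikube/profiles/default-k8s-diff-port-454524/config.json")
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(b, &cc); err != nil {
		panic(err)
	}
	if len(cc.Nodes) > 0 {
		fmt.Printf("%s: %s on %s:%d\n", cc.Name,
			cc.KubernetesConfig.KubernetesVersion, cc.Nodes[0].IP, cc.Nodes[0].Port)
	}
}
```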
	I1120 21:22:54.509868  567536 out.go:179] * Starting "default-k8s-diff-port-454524" primary control-plane node in "default-k8s-diff-port-454524" cluster
	I1120 21:22:54.511335  567536 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:22:54.512793  567536 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:22:54.514007  567536 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:22:54.514053  567536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:22:54.514065  567536 cache.go:65] Caching tarball of preloaded images
	I1120 21:22:54.514074  567536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:22:54.514199  567536 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:22:54.514226  567536 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:22:54.514367  567536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/config.json ...
	I1120 21:22:54.542704  567536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:22:54.542727  567536 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:22:54.542747  567536 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:22:54.542789  567536 start.go:360] acquireMachinesLock for default-k8s-diff-port-454524: {Name:mkc1f74cf93a6c8d3be3c8868fe49c35c90c52de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:22:54.542850  567536 start.go:364] duration metric: took 40.745µs to acquireMachinesLock for "default-k8s-diff-port-454524"
	I1120 21:22:54.542869  567536 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:22:54.542874  567536 fix.go:54] fixHost starting: 
	I1120 21:22:54.543172  567536 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:54.568391  567536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-454524: state=Stopped err=<nil>
	W1120 21:22:54.568436  567536 fix.go:138] unexpected machine state, will restart: <nil>
	W1120 21:22:51.367987  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	W1120 21:22:53.368857  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	W1120 21:22:56.393789  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:22:58.892913  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:22:54.571144  567536 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-454524" ...
	I1120 21:22:54.571252  567536 cli_runner.go:164] Run: docker start default-k8s-diff-port-454524
	I1120 21:22:54.949782  567536 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:22:54.977945  567536 kic.go:430] container "default-k8s-diff-port-454524" state is running.
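The fixHost sequence above (inspect, see state=Stopped, docker start, inspect again) is driven entirely through the docker CLI. A minimal sketch of the same dance, with the docker flags taken verbatim from the cli_runner lines (everything else is illustrative):

```go
// Sketch: restart a stopped kic machine container the way the log
// above does -- inspect its state, start it if needed, re-inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func state(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "default-k8s-diff-port-454524"
	s, err := state(name)
	if err != nil {
		panic(err)
	}
	if s != "running" { // e.g. "exited" for a stopped machine
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	s, _ = state(name)
	fmt.Println("container state:", s) // expect "running", as in the kic.go line above
}
```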
	I1120 21:22:54.978485  567536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-454524
	I1120 21:22:55.006403  567536 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/config.json ...
	I1120 21:22:55.006690  567536 machine.go:94] provisionDockerMachine start ...
	I1120 21:22:55.006776  567536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:55.034346  567536 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:55.034736  567536 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1120 21:22:55.034757  567536 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:22:55.035477  567536 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36652->127.0.0.1:33123: read: connection reset by peer
	I1120 21:22:58.189746  567536 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-454524
	
	I1120 21:22:58.189789  567536 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-454524"
	I1120 21:22:58.189873  567536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:58.217429  567536 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:58.217758  567536 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1120 21:22:58.217782  567536 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-454524 && echo "default-k8s-diff-port-454524" | sudo tee /etc/hostname
	I1120 21:22:58.384459  567536 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-454524
	
	I1120 21:22:58.384551  567536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:58.409666  567536 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:58.409913  567536 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1120 21:22:58.409933  567536 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-454524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-454524/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-454524' | sudo tee -a /etc/hosts; 
				fi
			fi
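The script above is an idempotent /etc/hosts pin: if no line already ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A sketch of assembling that script for an arbitrary hostname before shipping it over SSH (the shell body is the one logged above; the helper itself is illustrative, not minikube's code):

```go
// Sketch: build the /etc/hosts fix-up script for a given hostname.
// The shell body is the one shown verbatim in the log above.
package main

import "fmt"

func hostsFixup(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, hostname)
}

func main() { fmt.Print(hostsFixup("default-k8s-diff-port-454524")) }
```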
	I1120 21:22:58.566976  567536 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:22:58.567014  567536 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:22:58.567040  567536 ubuntu.go:190] setting up certificates
	I1120 21:22:58.567052  567536 provision.go:84] configureAuth start
	I1120 21:22:58.567109  567536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-454524
	I1120 21:22:58.590860  567536 provision.go:143] copyHostCerts
	I1120 21:22:58.590944  567536 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:22:58.590965  567536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:22:58.591041  567536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:22:58.591172  567536 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:22:58.591190  567536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:22:58.591606  567536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:22:58.591711  567536 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:22:58.591719  567536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:22:58.592132  567536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:22:58.592272  567536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-454524 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-454524 localhost minikube]
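The provision step above mints a server certificate signed by the profile CA, with the org and SAN list shown in the log line. A self-contained sketch of issuing such a cert with Go's crypto/x509 (a stand-in CA is generated inline and error handling is elided; this is not minikube's provisioning code):

```go
// Sketch: issue a server cert with the SANs from the provision log
// line above (127.0.0.1, 192.168.85.2, the hostname, localhost,
// minikube), signed by a locally generated stand-in CA.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-454524"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-454524"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[...] from the log line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-454524", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```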
	I1120 21:22:58.805534  567536 provision.go:177] copyRemoteCerts
	I1120 21:22:58.805626  567536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:22:58.805680  567536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:58.826950  567536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/default-k8s-diff-port-454524/id_rsa Username:docker}
	I1120 21:22:58.931331  567536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:22:58.952928  567536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1120 21:22:58.975062  567536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:22:59.001075  567536 provision.go:87] duration metric: took 434.010766ms to configureAuth
	I1120 21:22:59.001102  567536 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:22:59.001381  567536 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:22:59.001497  567536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:22:59.025109  567536 main.go:143] libmachine: Using SSH client type: native
	I1120 21:22:59.025374  567536 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1120 21:22:59.025394  567536 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1120 21:22:55.867762  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	W1120 21:22:57.868197  560374 pod_ready.go:104] pod "coredns-66bc5c9577-knwbq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.410283928Z" level=info msg="Created container ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=68d42e86-f91d-4b48-9266-6a35c444e6f6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.410887372Z" level=info msg="Starting container: ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a" id=d0fda6de-3e66-4445-989b-20814577c7db name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.413010754Z" level=info msg="Started container" PID=1728 containerID=ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=d0fda6de-3e66-4445-989b-20814577c7db name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.723089069Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7bbd3d82-cf4f-42e9-a2aa-03810939bc91 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.725569842Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=addd40d9-4d5d-401f-aeb5-9f884320d6bf name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.728947048Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=fc5edcc1-693e-4597-b198-595fa4e92939 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.729072916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.737143893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.737873144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.764611661Z" level=info msg="Created container fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=fc5edcc1-693e-4597-b198-595fa4e92939 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.76528981Z" level=info msg="Starting container: fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852" id=8233a29f-caa9-4c9e-b3d2-c424dec2c2ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:24 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:24.767249302Z" level=info msg="Started container" PID=1739 containerID=fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=8233a29f-caa9-4c9e-b3d2-c424dec2c2ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:25 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:25.728700204Z" level=info msg="Removing container: ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a" id=bc8c6147-cc42-461f-b339-633be417853b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:25 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:25.7407216Z" level=info msg="Removed container ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=bc8c6147-cc42-461f-b339-633be417853b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.616014889Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e1415bd4-1ec9-41cb-8d0e-407aac05db5d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.618801954Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=80ce23fa-7af1-4a43-b0e7-1c45e97b2511 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.620048451Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=2f4eedff-8f8b-4593-ae3d-c4068b59346c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.620201201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.628609363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.629181449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.668788173Z" level=info msg="Created container 5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=2f4eedff-8f8b-4593-ae3d-c4068b59346c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.669696517Z" level=info msg="Starting container: 5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539" id=41fa1e40-5bab-479b-89b9-15f7d7005e72 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.672111962Z" level=info msg="Started container" PID=1751 containerID=5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper id=41fa1e40-5bab-479b-89b9-15f7d7005e72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8e88571322909d504130e9cd8d49cc4ac20f4fd537dbbca2b396fe6de9c2d2c
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.772623479Z" level=info msg="Removing container: fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852" id=ea8309ba-441a-45e4-9a8d-b2eb6aa9b0b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:39 old-k8s-version-936214 crio[563]: time="2025-11-20T21:22:39.785914132Z" level=info msg="Removed container fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n/dashboard-metrics-scraper" id=ea8309ba-441a-45e4-9a8d-b2eb6aa9b0b6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5e27b9e0e5682       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   f8e8857132290       dashboard-metrics-scraper-5f989dc9cf-d857n       kubernetes-dashboard
	5ea10eba6b8ec       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   0e2866df9036a       kubernetes-dashboard-8694d4445c-54v92            kubernetes-dashboard
	57b54ff401c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   ea3ad66742306       storage-provisioner                              kube-system
	171a0489c7661       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   a04094c8ecd57       busybox                                          default
	9e2099ccb0977       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   cf577d1862961       coredns-5dd5756b68-5t2cr                         kube-system
	61081684175ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   a546205b3cc29       kindnet-949k6                                    kube-system
	41525c982cb92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   ea3ad66742306       storage-provisioner                              kube-system
	f373d89928607       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   8b05e13ff911e       kube-proxy-z9sk2                                 kube-system
	0c3ab02068c6a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   287d8808b4119       kube-scheduler-old-k8s-version-936214            kube-system
	4db07d3a1945b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   49735513eb0dc       kube-controller-manager-old-k8s-version-936214   kube-system
	1abb223a5577f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   00dd80194d030       kube-apiserver-old-k8s-version-936214            kube-system
	7132c523fba71       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   dd8427748a9ee       etcd-old-k8s-version-936214                      kube-system
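A table like the one above is what crictl prints when pointed at the node's CRI-O socket (the cri-socket annotation further down reads unix:///var/run/crio/crio.sock). A sketch, assuming crictl is installed on the node:

```go
// Sketch: list all containers against CRI-O, producing a status table
// like the one above. Assumes crictl is on PATH on the node.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```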
	
	
	==> coredns [9e2099ccb0977be64dd9119bf41c0a4befcecc3b4b75bc0141ef42f4a463c8bd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59592 - 53604 "HINFO IN 4353941279716189811.6347620940285862801. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023175055s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
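The closing warning is CoreDNS's kubernetes plugin timing out on the exact GET shown in the message. A minimal reproduction of that probe from inside the cluster network (TLS verification is skipped for brevity; real in-cluster clients verify the cluster CA):

```go
// Sketch: the /version probe against the in-cluster API VIP that the
// CoreDNS warning above is timing out on. InsecureSkipVerify keeps the
// example short; do not do this outside of debugging.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://10.96.0.1:443/version")
	if err != nil {
		fmt.Println("probe failed:", err) // an i/o timeout here matches the warning above
		return
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(b))
}
```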
	
	
	==> describe nodes <==
	Name:               old-k8s-version-936214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-936214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=old-k8s-version-936214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_20_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-936214
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:22:35 +0000   Thu, 20 Nov 2025 21:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-936214
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                6cfc11cb-7b0f-45ce-af89-7b901c8d9e72
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-5t2cr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-936214                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-949k6                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-936214             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-936214    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-z9sk2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-936214             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-d857n        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-54v92             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-936214 event: Registered Node old-k8s-version-936214 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-936214 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node old-k8s-version-936214 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-936214 event: Registered Node old-k8s-version-936214 in Controller
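The node description above is ordinary kubectl describe node output captured by the log collector; re-running it is a one-liner (a kubeconfig context named after the minikube profile is an assumption):

```go
// Sketch: re-run the node description above. Assumes a kubeconfig
// context named after the minikube profile.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-936214",
		"describe", "node", "old-k8s-version-936214")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.Run()
}
```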
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [7132c523fba7141184a2ef1f247ec4eb206ca7f30d8629666ab53b80e2e69392] <==
	{"level":"info","ts":"2025-11-20T21:22:02.192619Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-20T21:22:02.192732Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-20T21:22:02.193083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-20T21:22:02.193181Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-20T21:22:02.193396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:22:02.193467Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-20T21:22:02.195903Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-20T21:22:02.196264Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-20T21:22:02.196332Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-20T21:22:02.196532Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-20T21:22:02.196759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-20T21:22:03.180253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-20T21:22:03.180363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.180394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-20T21:22:03.181745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:22:03.183071Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-20T21:22:03.181678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-936214 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-20T21:22:03.183643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-20T21:22:03.183696Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-20T21:22:03.184128Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-20T21:22:03.18562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 21:23:01 up  4:05,  0 user,  load average: 4.58, 4.72, 2.94
	Linux old-k8s-version-936214 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [61081684175abd031f76a9008eca57cef79e62d30dd8256df8bafe77d61e386d] <==
	I1120 21:22:05.193062       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:22:05.193534       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:22:05.193748       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:05.193764       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:05.193799       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:05.482181       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:05.482253       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:05.482267       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:05.482423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:22:05.982840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:05.982889       1 metrics.go:72] Registering metrics
	I1120 21:22:05.983016       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:15.483904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:15.483974       1 main.go:301] handling current node
	I1120 21:22:25.482823       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:25.482851       1 main.go:301] handling current node
	I1120 21:22:35.482712       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:35.482764       1 main.go:301] handling current node
	I1120 21:22:45.482276       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:45.482306       1 main.go:301] handling current node
	I1120 21:22:55.487905       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1120 21:22:55.487933       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1abb223a5577f9fa1bd9aeb94acfa6c5b167f63a94e72eafc2ed20e6bef9394d] <==
	I1120 21:22:04.536410       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1120 21:22:04.536779       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1120 21:22:04.536790       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:04.536878       1 aggregator.go:166] initial CRD sync complete...
	I1120 21:22:04.536886       1 autoregister_controller.go:141] Starting autoregister controller
	I1120 21:22:04.536892       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:22:04.536899       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:22:04.545002       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:04.579808       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1120 21:22:04.632615       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1120 21:22:04.632705       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1120 21:22:04.632783       1 shared_informer.go:318] Caches are synced for configmaps
	I1120 21:22:04.632838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1120 21:22:04.639683       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:22:05.437152       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:05.539783       1 controller.go:624] quota admission added evaluator for: namespaces
	I1120 21:22:05.578717       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1120 21:22:05.600307       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:05.611794       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:05.620400       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1120 21:22:05.663002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.81.161"}
	I1120 21:22:05.681528       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.60.181"}
	I1120 21:22:17.117727       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1120 21:22:17.520775       1 controller.go:624] quota admission added evaluator for: endpoints
	I1120 21:22:17.569100       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4db07d3a1945bd5a3ba2dd1d5a6e1c3272a1fd19c49a8bf6741fdf8e8a1f5997] <==
	I1120 21:22:17.194493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.525832ms"
	I1120 21:22:17.195288       1 shared_informer.go:318] Caches are synced for disruption
	I1120 21:22:17.195305       1 shared_informer.go:318] Caches are synced for ephemeral
	I1120 21:22:17.203647       1 shared_informer.go:318] Caches are synced for attach detach
	I1120 21:22:17.206188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.641378ms"
	I1120 21:22:17.206351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.29µs"
	I1120 21:22:17.206533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="157.061µs"
	I1120 21:22:17.214091       1 shared_informer.go:318] Caches are synced for expand
	I1120 21:22:17.254728       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:22:17.264123       1 shared_informer.go:318] Caches are synced for PVC protection
	I1120 21:22:17.264231       1 shared_informer.go:318] Caches are synced for stateful set
	I1120 21:22:17.267528       1 shared_informer.go:318] Caches are synced for resource quota
	I1120 21:22:17.528493       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1120 21:22:17.585701       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:22:17.601186       1 shared_informer.go:318] Caches are synced for garbage collector
	I1120 21:22:17.601245       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1120 21:22:22.740357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.381503ms"
	I1120 21:22:22.740461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.642µs"
	I1120 21:22:24.734801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.393µs"
	I1120 21:22:25.739658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.548µs"
	I1120 21:22:26.743958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.405µs"
	I1120 21:22:39.788842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.467µs"
	I1120 21:22:41.440899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.055738ms"
	I1120 21:22:41.441052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.874µs"
	I1120 21:22:47.498103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.846µs"
	
	
	==> kube-proxy [f373d899286073d442be4cde971065cb1ee32153e63cf406a36c36f29797c385] <==
	I1120 21:22:05.053620       1 server_others.go:69] "Using iptables proxy"
	I1120 21:22:05.066100       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1120 21:22:05.087981       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:05.090661       1 server_others.go:152] "Using iptables Proxier"
	I1120 21:22:05.090744       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1120 21:22:05.090766       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1120 21:22:05.090807       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1120 21:22:05.091050       1 server.go:846] "Version info" version="v1.28.0"
	I1120 21:22:05.091086       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:05.091861       1 config.go:315] "Starting node config controller"
	I1120 21:22:05.091930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1120 21:22:05.092037       1 config.go:188] "Starting service config controller"
	I1120 21:22:05.092281       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1120 21:22:05.092243       1 config.go:97] "Starting endpoint slice config controller"
	I1120 21:22:05.092389       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1120 21:22:05.192862       1 shared_informer.go:318] Caches are synced for node config
	I1120 21:22:05.192869       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1120 21:22:05.192891       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [0c3ab02068c6a50c2ebd57387bc3f723bbfb949f1d1566a148f96aa54f5ec1a5] <==
	E1120 21:22:04.544752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1120 21:22:04.546780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1120 21:22:04.546956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1120 21:22:04.547684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1120 21:22:04.547711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1120 21:22:04.547717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W1120 21:22:04.547074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1120 21:22:04.547783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1120 21:22:04.547293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1120 21:22:04.547815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1120 21:22:04.547392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1120 21:22:04.547824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1120 21:22:04.547841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1120 21:22:04.547844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1120 21:22:04.547747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1120 21:22:04.547907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1120 21:22:04.547926       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1120 21:22:04.547943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1120 21:22:04.547913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1120 21:22:04.547957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1120 21:22:04.548067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1120 21:22:04.548185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:22:04.548232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1120 21:22:04.548273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1120 21:22:05.428312       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.184406     720 topology_manager.go:215] "Topology Admit Handler" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.184755     720 topology_manager.go:215] "Topology Admit Handler" podUID="172155fb-773b-4a5d-b9d0-9f9043bd4b72" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362445     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc318bd2-9775-4aca-8a07-66fb46a6a9e3-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-d857n\" (UID: \"cc318bd2-9775-4aca-8a07-66fb46a6a9e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362516     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/172155fb-773b-4a5d-b9d0-9f9043bd4b72-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-54v92\" (UID: \"172155fb-773b-4a5d-b9d0-9f9043bd4b72\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362656     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpddq\" (UniqueName: \"kubernetes.io/projected/172155fb-773b-4a5d-b9d0-9f9043bd4b72-kube-api-access-bpddq\") pod \"kubernetes-dashboard-8694d4445c-54v92\" (UID: \"172155fb-773b-4a5d-b9d0-9f9043bd4b72\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92"
	Nov 20 21:22:17 old-k8s-version-936214 kubelet[720]: I1120 21:22:17.362764     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9vv\" (UniqueName: \"kubernetes.io/projected/cc318bd2-9775-4aca-8a07-66fb46a6a9e3-kube-api-access-kq9vv\") pod \"dashboard-metrics-scraper-5f989dc9cf-d857n\" (UID: \"cc318bd2-9775-4aca-8a07-66fb46a6a9e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n"
	Nov 20 21:22:22 old-k8s-version-936214 kubelet[720]: I1120 21:22:22.729957     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-54v92" podStartSLOduration=1.1163290319999999 podCreationTimestamp="2025-11-20 21:22:17 +0000 UTC" firstStartedPulling="2025-11-20 21:22:17.510689113 +0000 UTC m=+16.027639860" lastFinishedPulling="2025-11-20 21:22:22.124212921 +0000 UTC m=+20.641163667" observedRunningTime="2025-11-20 21:22:22.729550669 +0000 UTC m=+21.246501423" watchObservedRunningTime="2025-11-20 21:22:22.729852839 +0000 UTC m=+21.246803600"
	Nov 20 21:22:24 old-k8s-version-936214 kubelet[720]: I1120 21:22:24.722537     720 scope.go:117] "RemoveContainer" containerID="ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: I1120 21:22:25.727292     720 scope.go:117] "RemoveContainer" containerID="ba43f2e5a0ff01ae6f493cfb6551fbbd00aca76f4f8eb296d66edb1dcc6e1d6a"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: I1120 21:22:25.727540     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:25 old-k8s-version-936214 kubelet[720]: E1120 21:22:25.728001     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:26 old-k8s-version-936214 kubelet[720]: I1120 21:22:26.731927     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:26 old-k8s-version-936214 kubelet[720]: E1120 21:22:26.732359     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:27 old-k8s-version-936214 kubelet[720]: I1120 21:22:27.733868     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:27 old-k8s-version-936214 kubelet[720]: E1120 21:22:27.734150     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.614310     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.769951     720 scope.go:117] "RemoveContainer" containerID="fe5d8cd8d02fe17518a43a31b9e658e6d1f71e7217b574ed2cc9a6b6a3943852"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: I1120 21:22:39.770256     720 scope.go:117] "RemoveContainer" containerID="5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	Nov 20 21:22:39 old-k8s-version-936214 kubelet[720]: E1120 21:22:39.770563     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:47 old-k8s-version-936214 kubelet[720]: I1120 21:22:47.485851     720 scope.go:117] "RemoveContainer" containerID="5e27b9e0e5682359cc8e393f4987573aa83b814056075951b43134ed8e3ef539"
	Nov 20 21:22:47 old-k8s-version-936214 kubelet[720]: E1120 21:22:47.486305     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-d857n_kubernetes-dashboard(cc318bd2-9775-4aca-8a07-66fb46a6a9e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-d857n" podUID="cc318bd2-9775-4aca-8a07-66fb46a6a9e3"
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:22:55 old-k8s-version-936214 systemd[1]: kubelet.service: Consumed 1.630s CPU time.
	
	
	==> kubernetes-dashboard [5ea10eba6b8ec6f266b0681f4ebfa5bb9cd9448a17da2f7b45372a299f33486e] <==
	2025/11/20 21:22:22 Starting overwatch
	2025/11/20 21:22:22 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:22 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:22 Using secret token for csrf signing
	2025/11/20 21:22:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:22 Successful initial request to the apiserver, version: v1.28.0
	2025/11/20 21:22:22 Generating JWE encryption key
	2025/11/20 21:22:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:22 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:22 Creating in-cluster Sidecar client
	2025/11/20 21:22:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:22 Serving insecurely on HTTP port: 9090
	2025/11/20 21:22:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41525c982cb9247e346059bd5596f43003c4c75964a066af4ec1c6b0273b2f64] <==
	I1120 21:22:05.011945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:05.014026       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [57b54ff401c0b8d508e17c74c88eff5afe7d3996e5addff4c5d9aa0903daf8e4] <==
	I1120 21:22:05.721849       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1120 21:22:05.731785       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1120 21:22:05.731837       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1120 21:22:23.133435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1120 21:22:23.133666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a!
	I1120 21:22:23.133616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e2486bc-d5c5-4ff5-8f75-9bef5c9224fc", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a became leader
	I1120 21:22:23.234472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-936214_d2bfd7f9-7c77-4deb-ba0a-e57c9d3df46a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-936214 -n old-k8s-version-936214
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-936214 -n old-k8s-version-936214: exit status 2 (354.818389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-936214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-166874 --alsologtostderr -v=1
E1120 21:23:17.981670  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-166874 --alsologtostderr -v=1: exit status 80 (2.244598751s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-166874 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:23:17.695577  574240 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:17.695836  574240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:17.695846  574240 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:17.695850  574240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:17.696051  574240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:17.696289  574240 out.go:368] Setting JSON to false
	I1120 21:23:17.696342  574240 mustload.go:66] Loading cluster: no-preload-166874
	I1120 21:23:17.696658  574240 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:17.697058  574240 cli_runner.go:164] Run: docker container inspect no-preload-166874 --format={{.State.Status}}
	I1120 21:23:17.715743  574240 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:23:17.716151  574240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:17.783578  574240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-20 21:23:17.771560083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:17.784303  574240 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-166874 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:23:17.786099  574240 out.go:179] * Pausing node no-preload-166874 ... 
	I1120 21:23:17.787349  574240 host.go:66] Checking if "no-preload-166874" exists ...
	I1120 21:23:17.787651  574240 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:17.787702  574240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-166874
	I1120 21:23:17.806595  574240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/no-preload-166874/id_rsa Username:docker}
	I1120 21:23:17.904830  574240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:17.919430  574240 pause.go:52] kubelet running: true
	I1120 21:23:17.919504  574240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:18.103440  574240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:18.103536  574240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:18.182363  574240 cri.go:89] found id: "b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc"
	I1120 21:23:18.182393  574240 cri.go:89] found id: "2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193"
	I1120 21:23:18.182398  574240 cri.go:89] found id: "5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f"
	I1120 21:23:18.182404  574240 cri.go:89] found id: "1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9"
	I1120 21:23:18.182408  574240 cri.go:89] found id: "6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab"
	I1120 21:23:18.182414  574240 cri.go:89] found id: "61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab"
	I1120 21:23:18.182419  574240 cri.go:89] found id: "fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87"
	I1120 21:23:18.182423  574240 cri.go:89] found id: "4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605"
	I1120 21:23:18.182427  574240 cri.go:89] found id: "e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0"
	I1120 21:23:18.182433  574240 cri.go:89] found id: "b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	I1120 21:23:18.182439  574240 cri.go:89] found id: "a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041"
	I1120 21:23:18.182441  574240 cri.go:89] found id: ""
	I1120 21:23:18.182480  574240 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:18.194959  574240 retry.go:31] will retry after 173.653042ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:18Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:18.369406  574240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:18.383091  574240 pause.go:52] kubelet running: false
	I1120 21:23:18.383165  574240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:18.553763  574240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:18.553864  574240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:18.632821  574240 cri.go:89] found id: "b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc"
	I1120 21:23:18.632853  574240 cri.go:89] found id: "2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193"
	I1120 21:23:18.632859  574240 cri.go:89] found id: "5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f"
	I1120 21:23:18.632863  574240 cri.go:89] found id: "1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9"
	I1120 21:23:18.632867  574240 cri.go:89] found id: "6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab"
	I1120 21:23:18.632872  574240 cri.go:89] found id: "61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab"
	I1120 21:23:18.632876  574240 cri.go:89] found id: "fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87"
	I1120 21:23:18.632879  574240 cri.go:89] found id: "4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605"
	I1120 21:23:18.632883  574240 cri.go:89] found id: "e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0"
	I1120 21:23:18.632910  574240 cri.go:89] found id: "b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	I1120 21:23:18.632918  574240 cri.go:89] found id: "a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041"
	I1120 21:23:18.632921  574240 cri.go:89] found id: ""
	I1120 21:23:18.633007  574240 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:18.646311  574240 retry.go:31] will retry after 399.477462ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:18Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:19.047001  574240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:19.061463  574240 pause.go:52] kubelet running: false
	I1120 21:23:19.061521  574240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:19.217751  574240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:19.217862  574240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:19.289090  574240 cri.go:89] found id: "b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc"
	I1120 21:23:19.289114  574240 cri.go:89] found id: "2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193"
	I1120 21:23:19.289119  574240 cri.go:89] found id: "5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f"
	I1120 21:23:19.289123  574240 cri.go:89] found id: "1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9"
	I1120 21:23:19.289127  574240 cri.go:89] found id: "6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab"
	I1120 21:23:19.289132  574240 cri.go:89] found id: "61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab"
	I1120 21:23:19.289136  574240 cri.go:89] found id: "fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87"
	I1120 21:23:19.289140  574240 cri.go:89] found id: "4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605"
	I1120 21:23:19.289152  574240 cri.go:89] found id: "e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0"
	I1120 21:23:19.289160  574240 cri.go:89] found id: "b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	I1120 21:23:19.289164  574240 cri.go:89] found id: "a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041"
	I1120 21:23:19.289168  574240 cri.go:89] found id: ""
	I1120 21:23:19.289212  574240 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:19.301897  574240 retry.go:31] will retry after 307.057259ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:19Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:19.609424  574240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:19.623425  574240 pause.go:52] kubelet running: false
	I1120 21:23:19.623495  574240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:19.771986  574240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:19.772097  574240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:19.844485  574240 cri.go:89] found id: "b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc"
	I1120 21:23:19.844508  574240 cri.go:89] found id: "2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193"
	I1120 21:23:19.844512  574240 cri.go:89] found id: "5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f"
	I1120 21:23:19.844515  574240 cri.go:89] found id: "1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9"
	I1120 21:23:19.844518  574240 cri.go:89] found id: "6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab"
	I1120 21:23:19.844522  574240 cri.go:89] found id: "61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab"
	I1120 21:23:19.844525  574240 cri.go:89] found id: "fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87"
	I1120 21:23:19.844527  574240 cri.go:89] found id: "4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605"
	I1120 21:23:19.844530  574240 cri.go:89] found id: "e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0"
	I1120 21:23:19.844542  574240 cri.go:89] found id: "b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	I1120 21:23:19.844545  574240 cri.go:89] found id: "a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041"
	I1120 21:23:19.844547  574240 cri.go:89] found id: ""
	I1120 21:23:19.844586  574240 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:19.859329  574240 out.go:203] 
	W1120 21:23:19.860697  574240 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:23:19.860714  574240 out.go:285] * 
	* 
	W1120 21:23:19.865610  574240 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:23:19.866977  574240 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-166874 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-166874
helpers_test.go:243: (dbg) docker inspect no-preload-166874:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	        "Created": "2025-11-20T21:20:53.087247999Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560637,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:20.633930186Z",
	            "FinishedAt": "2025-11-20T21:22:19.353806735Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hostname",
	        "HostsPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hosts",
	        "LogPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f-json.log",
	        "Name": "/no-preload-166874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-166874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-166874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	                "LowerDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-166874",
	                "Source": "/var/lib/docker/volumes/no-preload-166874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-166874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-166874",
	                "name.minikube.sigs.k8s.io": "no-preload-166874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25bd0c7bfb6fc4f7fc8b66f291eed35078e05eda56de441c54602323fcb2c602",
	            "SandboxKey": "/var/run/docker/netns/25bd0c7bfb6f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-166874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bf71dac4c7dbfe0cbfa1577ea48c4b78277a2aaefe1bc1e081bb5b02ff78f81",
	                    "EndpointID": "3ab818d21929ab7cd518caa2f1973836c91e65bfec4b89c6d966a1deefe65ea0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:ae:d6:7c:c5:db",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-166874",
	                        "745a5057ecd0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874: exit status 2 (365.713141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-166874 logs -n 25
E1120 21:23:20.543591  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-166874 logs -n 25: (1.175807522s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
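(For reference, the first entry below decodes under that format as: severity I for info, date 1120 for Nov 20, wall-clock time with microseconds, thread id 571789, and the emitting source location out.go:360.)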
	I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:06.049501  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049513  571789 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:06.049519  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049841  571789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:06.050567  571789 out.go:368] Setting JSON to false
	I1120 21:23:06.052400  571789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14728,"bootTime":1763659058,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:06.052535  571789 start.go:143] virtualization: kvm guest
	I1120 21:23:06.055111  571789 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:06.056602  571789 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:06.056605  571789 notify.go:221] Checking for updates...
	I1120 21:23:06.062930  571789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:06.067567  571789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:06.069232  571789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:06.070624  571789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:06.072902  571789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:06.074784  571789 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.074945  571789 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075081  571789 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075229  571789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:06.120678  571789 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:06.120819  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.216315  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.199460321 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.216453  571789 docker.go:319] overlay module found
	I1120 21:23:06.218400  571789 out.go:179] * Using the docker driver based on user configuration
	I1120 21:23:06.219702  571789 start.go:309] selected driver: docker
	I1120 21:23:06.219714  571789 start.go:930] validating driver "docker" against <nil>
	I1120 21:23:06.219729  571789 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:06.220696  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.302193  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.29041782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.302376  571789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 21:23:06.302402  571789 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 21:23:06.302588  571789 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:06.305364  571789 out.go:179] * Using Docker driver with root privileges
	I1120 21:23:06.306728  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:06.306783  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:06.306792  571789 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:23:06.306891  571789 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:06.308307  571789 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:06.309596  571789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:06.311056  571789 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:06.312309  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.312345  571789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:06.312360  571789 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:06.312412  571789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:06.312479  571789 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:06.312494  571789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:06.312653  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:06.312677  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json: {Name:mkf4f376b35371249315ca8102adde29558a901f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
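The saved profile config can be read back with jq; a minimal sketch, assuming the JSON field names mirror the ClusterConfig dump above:
	# read selected Kubernetes settings back out of the persisted profile config
	jq '.KubernetesConfig | {KubernetesVersion, NetworkPlugin, ServiceCIDR}' \
	  /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json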
	I1120 21:23:06.340931  571789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:06.340959  571789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:06.340975  571789 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:06.341010  571789 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:06.341132  571789 start.go:364] duration metric: took 97.864µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:06.341163  571789 start.go:93] Provisioning new machine with config: &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:06.341279  571789 start.go:125] createHost starting for "" (driver="docker")
	W1120 21:23:05.393230  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:07.891482  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:23:05.205163  567536 addons.go:515] duration metric: took 2.420707864s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:05.695398  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:05.702083  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:05.702112  567536 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:06.195506  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:06.201376  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:23:06.202743  567536 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:06.202819  567536 api_server.go:131] duration metric: took 1.008149378s to wait for apiserver health ...
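The readiness probe above can be reproduced by hand. A minimal sketch; the client certificate filenames are assumptions based on the standard minikube profile layout, not taken from this log:
	# query the apiserver health endpoint directly; ?verbose lists the per-check
	# [+]/[-] lines even on success (a 500 prints them automatically, as above)
	curl -sk \
	  --cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.crt \
	  --key  /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/default-k8s-diff-port-454524/client.key \
	  "https://192.168.85.2:8444/healthz?verbose"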
	I1120 21:23:06.202844  567536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:06.209670  567536 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:06.209779  567536 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.209798  567536 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.209807  567536 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.209817  567536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.209832  567536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.209838  567536 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.209845  567536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.209856  567536 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.209865  567536 system_pods.go:74] duration metric: took 7.010955ms to wait for pod list to return data ...
	I1120 21:23:06.209877  567536 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:06.215993  567536 default_sa.go:45] found service account: "default"
	I1120 21:23:06.216099  567536 default_sa.go:55] duration metric: took 6.211471ms for default service account to be created ...
	I1120 21:23:06.216167  567536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:23:06.219656  567536 system_pods.go:86] 8 kube-system pods found
	I1120 21:23:06.219693  567536 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.219715  567536 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.219722  567536 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.219731  567536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.219739  567536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.219745  567536 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.219754  567536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.219761  567536 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.219771  567536 system_pods.go:126] duration metric: took 3.576854ms to wait for k8s-apps to be running ...
	I1120 21:23:06.219780  567536 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:23:06.219827  567536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:06.242346  567536 system_svc.go:56] duration metric: took 22.555852ms WaitForService to wait for kubelet
	I1120 21:23:06.242379  567536 kubeadm.go:587] duration metric: took 3.45805481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:23:06.242401  567536 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:06.248588  567536 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:06.248623  567536 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:06.248641  567536 node_conditions.go:105] duration metric: took 6.233957ms to run NodePressure ...
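The same capacity figures can be pulled with kubectl; a sketch, assuming the current kubeconfig context points at this cluster:
	# node capacity as reported above (cpu: "8", ephemeral-storage: "304681132Ki")
	kubectl get nodes -o jsonpath='{.items[0].status.capacity}'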
	I1120 21:23:06.248657  567536 start.go:242] waiting for startup goroutines ...
	I1120 21:23:06.248666  567536 start.go:247] waiting for cluster config update ...
	I1120 21:23:06.248680  567536 start.go:256] writing updated cluster config ...
	I1120 21:23:06.249011  567536 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:06.254875  567536 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:06.260944  567536 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:23:08.267255  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:06.343254  571789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:23:06.343455  571789 start.go:159] libmachine.API.Create for "newest-cni-678421" (driver="docker")
	I1120 21:23:06.343482  571789 client.go:173] LocalClient.Create starting
	I1120 21:23:06.343553  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:23:06.343582  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343598  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.343655  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:23:06.343676  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343686  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.344001  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:23:06.362461  571789 cli_runner.go:211] docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:23:06.362549  571789 network_create.go:284] running [docker network inspect newest-cni-678421] to gather additional debugging logs...
	I1120 21:23:06.362568  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421
	W1120 21:23:06.383025  571789 cli_runner.go:211] docker network inspect newest-cni-678421 returned with exit code 1
	I1120 21:23:06.383064  571789 network_create.go:287] error running [docker network inspect newest-cni-678421]: docker network inspect newest-cni-678421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-678421 not found
	I1120 21:23:06.383078  571789 network_create.go:289] output of [docker network inspect newest-cni-678421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-678421 not found
	
	** /stderr **
	I1120 21:23:06.383171  571789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:06.403776  571789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:23:06.404546  571789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:23:06.405526  571789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:23:06.406341  571789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ab433249a4f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:74:b3:0e:d4:91} reservation:<nil>}
	I1120 21:23:06.407123  571789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4a91837c366f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:66:0c:88:d0:b5:58} reservation:<nil>}
	I1120 21:23:06.407767  571789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6bf71dac4c7d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:29:7e:d9:60:3c} reservation:<nil>}
	I1120 21:23:06.408763  571789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f00f50}
	I1120 21:23:06.408794  571789 network_create.go:124] attempt to create docker network newest-cni-678421 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:23:06.408864  571789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-678421 newest-cni-678421
	I1120 21:23:06.467067  571789 network_create.go:108] docker network newest-cni-678421 192.168.103.0/24 created
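The subnet scan above is easy to reproduce from the shell; this sketch prints every Docker network with its IPv4 subnet, which is where the taken 192.168.49/58/67/76/85/94 ranges come from:
	# "name: subnet" for every docker network; the bridge networks carry the br-* interfaces above
	docker network inspect $(docker network ls -q) \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'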
	I1120 21:23:06.467117  571789 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-678421" container
	I1120 21:23:06.467193  571789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:23:06.485312  571789 cli_runner.go:164] Run: docker volume create newest-cni-678421 --label name.minikube.sigs.k8s.io=newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:23:06.505057  571789 oci.go:103] Successfully created a docker volume newest-cni-678421
	I1120 21:23:06.505146  571789 cli_runner.go:164] Run: docker run --rm --name newest-cni-678421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --entrypoint /usr/bin/test -v newest-cni-678421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:23:06.958057  571789 oci.go:107] Successfully prepared a docker volume newest-cni-678421
	I1120 21:23:06.958140  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.958154  571789 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:23:06.958256  571789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1120 21:23:09.892319  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:11.894030  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:10.767056  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:12.767732  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:11.773995  571789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.815679837s)
	I1120 21:23:11.774033  571789 kic.go:203] duration metric: took 4.815876955s to extract preloaded images to volume ...
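The populated volume can be inspected with a throwaway container; a sketch (busybox is an arbitrary helper image, not part of the run above):
	# the preload tarball unpacks the cri-o image store under lib/containers inside the volume
	docker run --rm -v newest-cni-678421:/var busybox ls /var/lib/containers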
	W1120 21:23:11.774136  571789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:23:11.774185  571789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:23:11.774253  571789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:23:11.850339  571789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-678421 --name newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-678421 --network newest-cni-678421 --ip 192.168.103.2 --volume newest-cni-678421:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:23:12.533350  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Running}}
	I1120 21:23:12.555197  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.575307  571789 cli_runner.go:164] Run: docker exec newest-cni-678421 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:23:12.632671  571789 oci.go:144] the created container "newest-cni-678421" has a running status.
	I1120 21:23:12.632720  571789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa...
	I1120 21:23:12.863151  571789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:23:12.899100  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.920234  571789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:23:12.920260  571789 kic_runner.go:114] Args: [docker exec --privileged newest-cni-678421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:23:12.970999  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.993837  571789 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:12.993956  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.013867  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.014157  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.014178  571789 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:13.161308  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.161339  571789 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:13.161406  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.181829  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.182058  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.182073  571789 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:13.328927  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.329019  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.349098  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.349376  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.349398  571789 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:13.484139  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
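The effect of that hosts snippet can be confirmed from inside the node; a sketch using the binary under test:
	# should resolve via the 127.0.1.1 entry written above
	out/minikube-linux-amd64 -p newest-cni-678421 ssh "getent hosts newest-cni-678421"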
	I1120 21:23:13.484177  571789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:13.484259  571789 ubuntu.go:190] setting up certificates
	I1120 21:23:13.484275  571789 provision.go:84] configureAuth start
	I1120 21:23:13.484350  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:13.503703  571789 provision.go:143] copyHostCerts
	I1120 21:23:13.503779  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:13.503794  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:13.503883  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:13.504018  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:13.504032  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:13.504073  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:13.504158  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:13.504168  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:13.504202  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:13.504315  571789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:13.626916  571789 provision.go:177] copyRemoteCerts
	I1120 21:23:13.626988  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:13.627031  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.646188  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:13.742867  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:13.765755  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:13.787099  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:13.810322  571789 provision.go:87] duration metric: took 326.026448ms to configureAuth
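The generated server certificate can be checked for the SANs requested above; a sketch:
	# expect: 127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-678421
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'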
	I1120 21:23:13.810353  571789 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:13.810568  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:13.810697  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.837968  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.838338  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.838366  571789 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:14.162945  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:14.162974  571789 machine.go:97] duration metric: took 1.169111697s to provisionDockerMachine
	I1120 21:23:14.162987  571789 client.go:176] duration metric: took 7.819496914s to LocalClient.Create
	I1120 21:23:14.163010  571789 start.go:167] duration metric: took 7.81955499s to libmachine.API.Create "newest-cni-678421"
	I1120 21:23:14.163019  571789 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:14.163030  571789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:14.163109  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:14.163159  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.187939  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.299873  571789 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:14.304403  571789 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:14.304436  571789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:14.304458  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:14.304511  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:14.304580  571789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:14.304666  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:14.315114  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:14.341203  571789 start.go:296] duration metric: took 178.161388ms for postStartSetup
	I1120 21:23:14.341644  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.364787  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:14.365126  571789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:14.365189  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.388501  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.491729  571789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:14.498714  571789 start.go:128] duration metric: took 8.157415645s to createHost
	I1120 21:23:14.498748  571789 start.go:83] releasing machines lock for "newest-cni-678421", held for 8.157600418s
	I1120 21:23:14.498845  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.524498  571789 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:14.524558  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.524576  571789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:14.524652  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.549686  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.550328  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.730932  571789 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:14.739895  571789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:14.789379  571789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:14.795855  571789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:14.795934  571789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:14.829432  571789 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:23:14.829462  571789 start.go:496] detecting cgroup driver to use...
	I1120 21:23:14.829510  571789 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:14.829589  571789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:14.851761  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:14.867809  571789 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:14.867934  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:14.892255  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:14.918730  571789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:15.037147  571789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:15.171533  571789 docker.go:234] disabling docker service ...
	I1120 21:23:15.171611  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:15.196938  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:15.214136  571789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:15.323780  571789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:15.444697  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:15.464324  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:15.484640  571789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:15.484705  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.499771  571789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:15.499842  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.512691  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.526079  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.538826  571789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:15.550121  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.562853  571789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.582104  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.595993  571789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:15.606890  571789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:15.617086  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:15.737596  571789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:16.600257  571789 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:16.600349  571789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:16.605892  571789 start.go:564] Will wait 60s for crictl version
	I1120 21:23:16.606027  571789 ssh_runner.go:195] Run: which crictl
	I1120 21:23:16.610690  571789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:16.637058  571789 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:16.637154  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.670116  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.704078  571789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:16.705267  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:16.724295  571789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:16.728925  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.741905  571789 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
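
Aside: the CRI-O runtime configuration performed in the log above is a short sequence of shell edits. Consolidated in one place for readers reproducing it by hand (a sketch that mirrors the commands as logged, not minikube's actual Go code, and assumes the /etc/crio/crio.conf.d/02-crio.conf drop-in already exists):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # replace any existing conmon_cgroup setting with "pod"
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
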
	W1120 21:23:14.392714  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:16.891564  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:15.268024  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:17.768172  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.319772933Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.323492761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.323525321Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.454638714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a49754bb-dd29-4b92-a3b8-b1d7923c140e name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.457501908Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ca12f62-0bcb-44f0-bc28-56c17405ce27 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.460429531Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=d98765ef-229d-49d4-b41f-03d784eb0df0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.460584078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.467090834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.467591302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.518488245Z" level=info msg="Created container b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=d98765ef-229d-49d4-b41f-03d784eb0df0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.519255405Z" level=info msg="Starting container: b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1" id=86bf27e9-8a4d-4b71-a832-0dce40a5c7fb name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.52112505Z" level=info msg="Started container" PID=1762 containerID=b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper id=86bf27e9-8a4d-4b71-a832-0dce40a5c7fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee52d014781c6cda9904ed7690822b023e3d0e960e7883c66df4ad51a411c4e0
	Nov 20 21:22:52 no-preload-166874 crio[561]: time="2025-11-20T21:22:52.551448779Z" level=info msg="Removing container: bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d" id=aa3fab69-a34c-4670-a904-571c72a24a52 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:52 no-preload-166874 crio[561]: time="2025-11-20T21:22:52.563471184Z" level=info msg="Removed container bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=aa3fab69-a34c-4670-a904-571c72a24a52 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.454088067Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=731577ae-a71e-46ab-a73d-5db51ed56c32 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.45524913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6dac1c4-5ec4-4a5b-b596-761d2cb1bd7f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.456433461Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=a89fee83-e598-4718-b07d-ad69b5d17277 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.456592955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.463956389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.464611943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.518304559Z" level=info msg="Created container b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=a89fee83-e598-4718-b07d-ad69b5d17277 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.51927645Z" level=info msg="Starting container: b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc" id=065a4f4f-cede-4b5f-af9e-93ed0b5ce16d name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.521694418Z" level=info msg="Started container" PID=1795 containerID=b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper id=065a4f4f-cede-4b5f-af9e-93ed0b5ce16d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee52d014781c6cda9904ed7690822b023e3d0e960e7883c66df4ad51a411c4e0
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.619722572Z" level=info msg="Removing container: b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1" id=b2071b91-c98c-44c5-88a1-d1326eee0f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.632950232Z" level=info msg="Removed container b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=b2071b91-c98c-44c5-88a1-d1326eee0f42 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3b90890c938a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   ee52d014781c6       dashboard-metrics-scraper-6ffb444bf9-hk6zc   kubernetes-dashboard
	a894cbc2c5d9a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   b46626e9a1900       kubernetes-dashboard-855c9754f9-nljn5        kubernetes-dashboard
	b3875ad2b3649       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   70b393d45cbcd       storage-provisioner                          kube-system
	e7a63af5ecd5a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   f762bf97ab975       busybox                                      default
	2332c3c8973f2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   dc30dfebd1b48       coredns-66bc5c9577-knwbq                     kube-system
	5d3ac01cd9d5f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   e9e5d30490a15       kindnet-w6hk4                                kube-system
	1075c3753d9c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   70b393d45cbcd       storage-provisioner                          kube-system
	6d5e4d46c8745       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   ff28845816bc6       kube-proxy-8mtnk                             kube-system
	61e04250b1fad       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   2ffc9e6c27d9b       kube-controller-manager-no-preload-166874    kube-system
	fc952607f1385       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   04fbd2b0394a1       kube-apiserver-no-preload-166874             kube-system
	4abdd4a141a63       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   32392c1979e7a       kube-scheduler-no-preload-166874             kube-system
	e79d1f101bc84       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   7f5597a0c9a9e       etcd-no-preload-166874                       kube-system
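
The table above is standard CRI container-status output and can be regenerated on the node with crictl (a sketch; it assumes crictl is on PATH, whereas the log invokes it as /usr/local/bin/crictl):

    # list all containers, including exited ones, as in the table above
    sudo crictl ps -a
    # drill into a single container via the ID prefix from the first column
    sudo crictl inspect b3b90890c938a
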
	
	
	==> coredns [2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35066 - 55821 "HINFO IN 6859988204258836041.3573769941691693785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018043157s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
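
The i/o timeouts above are CoreDNS failing to reach the kubernetes Service VIP (10.96.0.1:443), most likely while kube-proxy was still programming its rules after the restart; the earlier "plugin/ready: Still waiting" lines are the ready plugin gating on that same API sync. A quick way to re-check this state with plain kubectl (hedged example, relying on the standard k8s-app=kube-dns label that kubeadm-style clusters put on CoreDNS):

    kubectl --context no-preload-166874 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context no-preload-166874 -n kube-system logs -l k8s-app=kube-dns --tail=20
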
	
	
	==> describe nodes <==
	Name:               no-preload-166874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-166874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-166874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-166874
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-166874
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                ad73315e-0ad1-465a-82ef-174a9e25f51f
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-knwbq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-166874                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-w6hk4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-166874              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-166874     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-8mtnk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-166874              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hk6zc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nljn5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-166874 event: Registered Node no-preload-166874 in Controller
	  Normal  NodeReady                95s                kubelet          Node no-preload-166874 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-166874 event: Registered Node no-preload-166874 in Controller
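
Sanity check on the Allocated resources table above: the percentages are taken against node capacity, so 850m of CPU requests on an 8-CPU (8000m) node is ~10%, and 220Mi of memory requests against 32863360Ki rounds down to 0%. Re-derived with bc:

    echo "scale=2; 850*100/8000" | bc            # cpu: 10.62 -> shown as 10%
    echo "scale=2; 220*1024*100/32863360" | bc   # memory: 0.68 -> shown as 0%
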
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0] <==
	{"level":"warn","ts":"2025-11-20T21:22:29.068082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.077416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.083988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.090585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.098339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.104604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.112166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.118325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.124948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.141438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.147549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.154654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.162325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.176882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.184159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.190808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.197958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.205666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.213804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.220642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.243838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.251723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.266856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.332064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36982","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:22:40.217979Z","caller":"traceutil/trace.go:172","msg":"trace[1792678598] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"135.196089ms","start":"2025-11-20T21:22:40.082759Z","end":"2025-11-20T21:22:40.217955Z","steps":["trace[1792678598] 'process raft request'  (duration: 65.557523ms)","trace[1792678598] 'compare'  (duration: 69.519832ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:23:21 up  4:05,  0 user,  load average: 4.04, 4.59, 2.93
	Linux no-preload-166874 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f] <==
	I1120 21:22:31.101835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:22:31.102119       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1120 21:22:31.102375       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:31.102398       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:31.102432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:31.303858       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:31.303890       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:31.303901       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:31.304066       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:22:31.804308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:31.804345       1 metrics.go:72] Registering metrics
	I1120 21:22:31.804453       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:41.303706       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:22:41.303790       1 main.go:301] handling current node
	I1120 21:22:51.305399       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:22:51.305431       1 main.go:301] handling current node
	I1120 21:23:01.303783       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:23:01.303836       1 main.go:301] handling current node
	I1120 21:23:11.310810       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:23:11.310862       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87] <==
	I1120 21:22:29.828200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:22:29.828211       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 21:22:29.832174       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:22:29.833692       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:29.835241       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:22:29.840536       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:22:29.843799       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:22:29.845951       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:22:29.846011       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:22:29.856909       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:22:29.857005       1 policy_source.go:240] refreshing policies
	I1120 21:22:29.857475       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:30.132355       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:22:30.159733       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:22:30.180138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:30.187207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:30.194723       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:22:30.228271       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.96.0"}
	I1120 21:22:30.237738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.57.154"}
	I1120 21:22:30.744025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:33.512012       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:22:33.563290       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:22:33.662035       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab] <==
	I1120 21:22:33.125269       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:22:33.127478       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:22:33.129748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:22:33.131966       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:22:33.138197       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:22:33.139398       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:22:33.141734       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:22:33.158388       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:22:33.158411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:22:33.158448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:22:33.158485       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:22:33.158514       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:22:33.158546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:22:33.158563       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:22:33.158569       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:22:33.159939       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:22:33.164634       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:33.165460       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:33.173860       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:22:33.173918       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:22:33.173959       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:22:33.173970       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:22:33.173977       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:22:33.177159       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:22:33.180436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab] <==
	I1120 21:22:30.875448       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:22:30.969360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:22:31.070209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:22:31.070279       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1120 21:22:31.070376       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:22:31.090267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:31.090329       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:22:31.095838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:22:31.096282       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:22:31.096324       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:31.097667       1 config.go:200] "Starting service config controller"
	I1120 21:22:31.097706       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:22:31.097717       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:22:31.097729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:22:31.097802       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:22:31.097832       1 config.go:309] "Starting node config controller"
	I1120 21:22:31.097840       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:22:31.097838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:22:31.097847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:22:31.198050       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:22:31.198259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:22:31.198283       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605] <==
	I1120 21:22:28.406755       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:22:29.823599       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:22:29.823636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:29.831263       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:22:29.831299       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:22:29.831302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.831330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.831420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.831435       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.831719       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:22:29.831815       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:22:29.932174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.932258       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.932312       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 20 21:22:36 no-preload-166874 kubelet[715]: I1120 21:22:36.499060     715 scope.go:117] "RemoveContainer" containerID="a0069d8b7ab256789648daae55a9a1614c7094b5fbe4cc3a780ac97cbf74e516"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: I1120 21:22:37.503789     715 scope.go:117] "RemoveContainer" containerID="a0069d8b7ab256789648daae55a9a1614c7094b5fbe4cc3a780ac97cbf74e516"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: I1120 21:22:37.504125     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: E1120 21:22:37.504371     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:38 no-preload-166874 kubelet[715]: I1120 21:22:38.507936     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:38 no-preload-166874 kubelet[715]: E1120 21:22:38.508127     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:40 no-preload-166874 kubelet[715]: I1120 21:22:40.571408     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:40 no-preload-166874 kubelet[715]: E1120 21:22:40.571627     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:44 no-preload-166874 kubelet[715]: I1120 21:22:44.167526     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nljn5" podStartSLOduration=4.33760598 podStartE2EDuration="11.167499125s" podCreationTimestamp="2025-11-20 21:22:33 +0000 UTC" firstStartedPulling="2025-11-20 21:22:33.954279727 +0000 UTC m=+6.592594948" lastFinishedPulling="2025-11-20 21:22:40.784172865 +0000 UTC m=+13.422488093" observedRunningTime="2025-11-20 21:22:41.52771218 +0000 UTC m=+14.166027414" watchObservedRunningTime="2025-11-20 21:22:44.167499125 +0000 UTC m=+16.805814360"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: I1120 21:22:51.454033     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: I1120 21:22:51.545617     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: E1120 21:22:51.545795     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: I1120 21:22:52.550179     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: I1120 21:22:52.550452     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: E1120 21:22:52.550641     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:00 no-preload-166874 kubelet[715]: I1120 21:23:00.572341     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:00 no-preload-166874 kubelet[715]: E1120 21:23:00.572563     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.453442     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.618371     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.618662     715 scope.go:117] "RemoveContainer" containerID="b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: E1120 21:23:14.618849     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:18 no-preload-166874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:18 no-preload-166874 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:18 no-preload-166874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:18 no-preload-166874 systemd[1]: kubelet.service: Consumed 1.741s CPU time.
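
The back-off values in the kubelet errors above (10s, then 20s, then 40s) match kubelet's CrashLoopBackOff policy: the restart delay starts at 10s and doubles on each failure, capped at 5m. A one-line sketch of that progression:

    # CrashLoopBackOff delay doubling, capped at 300s (5m)
    d=10; for i in 1 2 3 4 5 6; do echo "back-off ${d}s"; d=$((d*2)); [ "$d" -gt 300 ] && d=300; done
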
	
	
	==> kubernetes-dashboard [a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041] <==
	2025/11/20 21:22:40 Starting overwatch
	2025/11/20 21:22:40 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:40 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:40 Using secret token for csrf signing
	2025/11/20 21:22:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:40 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:22:40 Generating JWE encryption key
	2025/11/20 21:22:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:40 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:40 Creating in-cluster Sidecar client
	2025/11/20 21:22:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:41 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9] <==
	I1120 21:22:30.840057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:30.845484       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc] <==
	W1120 21:22:56.976523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:58.980699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:22:58.990113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:00.993510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:00.998042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:03.002725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:03.010963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:05.015125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:05.020305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:07.024433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:07.031262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:09.035017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:09.039775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:11.043586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:11.058375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:13.064884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:13.071914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:15.076319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:15.080740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:17.085633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:17.090008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.093564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.098563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.102492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.106984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
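
These warnings repeat roughly every two seconds because the storage-provisioner keeps polling a v1 Endpoints object (likely its leader-election lock). The replacement API named in the warning can be compared side by side with plain kubectl (a hedged example, not part of the test harness):

    # deprecated resource the provisioner is still reading
    kubectl --context no-preload-166874 -n kube-system get endpoints
    # successor resource named in the warning
    kubectl --context no-preload-166874 -n kube-system get endpointslices
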
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-166874 -n no-preload-166874: exit status 2 (371.349918ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
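The bare "Running" in the stdout block above is the rendered result of the Go text/template passed to --format: minikube parses the template and executes it against its exported status fields. A stand-in sketch of that mechanism (the Status struct here is illustrative, not minikube's actual type):

// statusfmt.go - render a --format-style template such as {{.APIServer}}.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	// template.Must panics on a parse error, acceptable for a fixed template.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints the single word "Running", matching the stdout block above.
}

Any exported field can be pulled the same way; the harness uses {{.Host}} for the second probe further down.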
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-166874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-166874
helpers_test.go:243: (dbg) docker inspect no-preload-166874:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	        "Created": "2025-11-20T21:20:53.087247999Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560637,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:20.633930186Z",
	            "FinishedAt": "2025-11-20T21:22:19.353806735Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hostname",
	        "HostsPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/hosts",
	        "LogPath": "/var/lib/docker/containers/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f/745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f-json.log",
	        "Name": "/no-preload-166874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-166874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-166874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "745a5057ecd0eccb680509b8d8713b532b6d3df910beea6dc62fb29140dcd78f",
	                "LowerDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2daa61846758c8cc184410c309974eb50a57fb4410192e7e30df4f5849ea102a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-166874",
	                "Source": "/var/lib/docker/volumes/no-preload-166874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-166874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-166874",
	                "name.minikube.sigs.k8s.io": "no-preload-166874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "25bd0c7bfb6fc4f7fc8b66f291eed35078e05eda56de441c54602323fcb2c602",
	            "SandboxKey": "/var/run/docker/netns/25bd0c7bfb6f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-166874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6bf71dac4c7dbfe0cbfa1577ea48c4b78277a2aaefe1bc1e081bb5b02ff78f81",
	                    "EndpointID": "3ab818d21929ab7cd518caa2f1973836c91e65bfec4b89c6d966a1deefe65ea0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "b2:ae:d6:7c:c5:db",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-166874",
	                        "745a5057ecd0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
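The inspect dump shows the container running and unpaused ("Paused": false) even though the pause test failed, and records that the API server's 8443/tcp is published on host port 33116. When a post-mortem needs a couple of these fields, the JSON array that docker inspect emits can be decoded directly; a small sketch, assuming the container name is this test's profile:

// inspectstate.go - decode just the State and 8443/tcp port binding from
// `docker inspect` output (field names match the dump above).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type container struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-166874").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		panic("unexpected inspect output")
	}
	c := cs[0]
	fmt.Println("status:", c.State.Status, "paused:", c.State.Paused)
	for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
	}
}

For a single field, docker's own template flag avoids the decoding entirely, e.g. docker inspect -f '{{.State.Status}}' no-preload-166874.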
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874: exit status 2 (348.636212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
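As with the earlier probe, status exits 2 while reporting Running, which the harness tolerates ("may be ok"). The Last Start log below shows the readiness side of the same story: api_server.go polls https://192.168.85.2:8444/healthz and sees a 500 (rbac/bootstrap-roles still pending) flip to a 200 on the next poll half a second later. A minimal sketch of such a poll; the address and the InsecureSkipVerify shortcut are assumptions for illustration only:

// healthzpoll.go - poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut; a real check should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code) // e.g. 500 until bootstrap hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver")
}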
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-166874 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-166874 logs -n 25: (1.197018802s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-936763 sudo crio config                                                                                                                                                                                                     │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p custom-flannel-936763                                                                                                                                                                                                                      │ custom-flannel-936763        │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ delete  │ -p disable-driver-mounts-454805                                                                                                                                                                                                               │ disable-driver-mounts-454805 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:06.049501  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049513  571789 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:06.049519  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049841  571789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:06.050567  571789 out.go:368] Setting JSON to false
	I1120 21:23:06.052400  571789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14728,"bootTime":1763659058,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:06.052535  571789 start.go:143] virtualization: kvm guest
	I1120 21:23:06.055111  571789 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:06.056602  571789 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:06.056605  571789 notify.go:221] Checking for updates...
	I1120 21:23:06.062930  571789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:06.067567  571789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:06.069232  571789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:06.070624  571789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:06.072902  571789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:06.074784  571789 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.074945  571789 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075081  571789 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075229  571789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:06.120678  571789 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:06.120819  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.216315  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.199460321 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.216453  571789 docker.go:319] overlay module found
	I1120 21:23:06.218400  571789 out.go:179] * Using the docker driver based on user configuration
	I1120 21:23:06.219702  571789 start.go:309] selected driver: docker
	I1120 21:23:06.219714  571789 start.go:930] validating driver "docker" against <nil>
	I1120 21:23:06.219729  571789 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:06.220696  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.302193  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.29041782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.302376  571789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 21:23:06.302402  571789 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 21:23:06.302588  571789 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:06.305364  571789 out.go:179] * Using Docker driver with root privileges
	I1120 21:23:06.306728  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:06.306783  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:06.306792  571789 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:23:06.306891  571789 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:06.308307  571789 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:06.309596  571789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:06.311056  571789 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:06.312309  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.312345  571789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:06.312360  571789 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:06.312412  571789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:06.312479  571789 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:06.312494  571789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:06.312653  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:06.312677  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json: {Name:mkf4f376b35371249315ca8102adde29558a901f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:06.340931  571789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:06.340959  571789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:06.340975  571789 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:06.341010  571789 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:06.341132  571789 start.go:364] duration metric: took 97.864µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:06.341163  571789 start.go:93] Provisioning new machine with config: &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:06.341279  571789 start.go:125] createHost starting for "" (driver="docker")
	W1120 21:23:05.393230  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:07.891482  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:23:05.205163  567536 addons.go:515] duration metric: took 2.420707864s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:05.695398  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:05.702083  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:05.702112  567536 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:06.195506  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:06.201376  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:23:06.202743  567536 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:06.202819  567536 api_server.go:131] duration metric: took 1.008149378s to wait for apiserver health ...
	I1120 21:23:06.202844  567536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:06.209670  567536 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:06.209779  567536 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.209798  567536 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.209807  567536 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.209817  567536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.209832  567536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.209838  567536 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.209845  567536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.209856  567536 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.209865  567536 system_pods.go:74] duration metric: took 7.010955ms to wait for pod list to return data ...
	I1120 21:23:06.209877  567536 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:06.215993  567536 default_sa.go:45] found service account: "default"
	I1120 21:23:06.216099  567536 default_sa.go:55] duration metric: took 6.211471ms for default service account to be created ...
	I1120 21:23:06.216167  567536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:23:06.219656  567536 system_pods.go:86] 8 kube-system pods found
	I1120 21:23:06.219693  567536 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.219715  567536 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.219722  567536 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.219731  567536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.219739  567536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.219745  567536 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.219754  567536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.219761  567536 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.219771  567536 system_pods.go:126] duration metric: took 3.576854ms to wait for k8s-apps to be running ...
	I1120 21:23:06.219780  567536 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:23:06.219827  567536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:06.242346  567536 system_svc.go:56] duration metric: took 22.555852ms WaitForService to wait for kubelet
	I1120 21:23:06.242379  567536 kubeadm.go:587] duration metric: took 3.45805481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:23:06.242401  567536 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:06.248588  567536 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:06.248623  567536 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:06.248641  567536 node_conditions.go:105] duration metric: took 6.233957ms to run NodePressure ...
	I1120 21:23:06.248657  567536 start.go:242] waiting for startup goroutines ...
	I1120 21:23:06.248666  567536 start.go:247] waiting for cluster config update ...
	I1120 21:23:06.248680  567536 start.go:256] writing updated cluster config ...
	I1120 21:23:06.249011  567536 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:06.254875  567536 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:06.260944  567536 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:23:08.267255  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:06.343254  571789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:23:06.343455  571789 start.go:159] libmachine.API.Create for "newest-cni-678421" (driver="docker")
	I1120 21:23:06.343482  571789 client.go:173] LocalClient.Create starting
	I1120 21:23:06.343553  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:23:06.343582  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343598  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.343655  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:23:06.343676  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343686  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.344001  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:23:06.362461  571789 cli_runner.go:211] docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:23:06.362549  571789 network_create.go:284] running [docker network inspect newest-cni-678421] to gather additional debugging logs...
	I1120 21:23:06.362568  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421
	W1120 21:23:06.383025  571789 cli_runner.go:211] docker network inspect newest-cni-678421 returned with exit code 1
	I1120 21:23:06.383064  571789 network_create.go:287] error running [docker network inspect newest-cni-678421]: docker network inspect newest-cni-678421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-678421 not found
	I1120 21:23:06.383078  571789 network_create.go:289] output of [docker network inspect newest-cni-678421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-678421 not found
	
	** /stderr **
	I1120 21:23:06.383171  571789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:06.403776  571789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:23:06.404546  571789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:23:06.405526  571789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:23:06.406341  571789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ab433249a4f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:74:b3:0e:d4:91} reservation:<nil>}
	I1120 21:23:06.407123  571789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4a91837c366f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:66:0c:88:d0:b5:58} reservation:<nil>}
	I1120 21:23:06.407767  571789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6bf71dac4c7d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:29:7e:d9:60:3c} reservation:<nil>}
	I1120 21:23:06.408763  571789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f00f50}
	I1120 21:23:06.408794  571789 network_create.go:124] attempt to create docker network newest-cni-678421 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:23:06.408864  571789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-678421 newest-cni-678421
	I1120 21:23:06.467067  571789 network_create.go:108] docker network newest-cni-678421 192.168.103.0/24 created
	I1120 21:23:06.467117  571789 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-678421" container
	I1120 21:23:06.467193  571789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:23:06.485312  571789 cli_runner.go:164] Run: docker volume create newest-cni-678421 --label name.minikube.sigs.k8s.io=newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:23:06.505057  571789 oci.go:103] Successfully created a docker volume newest-cni-678421
	I1120 21:23:06.505146  571789 cli_runner.go:164] Run: docker run --rm --name newest-cni-678421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --entrypoint /usr/bin/test -v newest-cni-678421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:23:06.958057  571789 oci.go:107] Successfully prepared a docker volume newest-cni-678421
	I1120 21:23:06.958140  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.958154  571789 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:23:06.958256  571789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1120 21:23:09.892319  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:11.894030  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:10.767056  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:12.767732  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:11.773995  571789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.815679837s)
	I1120 21:23:11.774033  571789 kic.go:203] duration metric: took 4.815876955s to extract preloaded images to volume ...
	W1120 21:23:11.774136  571789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:23:11.774185  571789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:23:11.774253  571789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:23:11.850339  571789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-678421 --name newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-678421 --network newest-cni-678421 --ip 192.168.103.2 --volume newest-cni-678421:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:23:12.533350  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Running}}
	I1120 21:23:12.555197  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.575307  571789 cli_runner.go:164] Run: docker exec newest-cni-678421 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:23:12.632671  571789 oci.go:144] the created container "newest-cni-678421" has a running status.
	I1120 21:23:12.632720  571789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa...
	I1120 21:23:12.863151  571789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:23:12.899100  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.920234  571789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:23:12.920260  571789 kic_runner.go:114] Args: [docker exec --privileged newest-cni-678421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:23:12.970999  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.993837  571789 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:12.993956  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.013867  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.014157  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.014178  571789 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:13.161308  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.161339  571789 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:13.161406  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.181829  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.182058  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.182073  571789 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:13.328927  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.329019  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.349098  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.349376  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.349398  571789 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:13.484139  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:13.484177  571789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:13.484259  571789 ubuntu.go:190] setting up certificates
	I1120 21:23:13.484275  571789 provision.go:84] configureAuth start
	I1120 21:23:13.484350  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:13.503703  571789 provision.go:143] copyHostCerts
	I1120 21:23:13.503779  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:13.503794  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:13.503883  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:13.504018  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:13.504032  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:13.504073  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:13.504158  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:13.504168  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:13.504202  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:13.504315  571789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:13.626916  571789 provision.go:177] copyRemoteCerts
	I1120 21:23:13.626988  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:13.627031  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.646188  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:13.742867  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:13.765755  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:13.787099  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
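	
	The server cert generated above carries the SANs listed at provision time (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-678421). One way to confirm them on the copied cert, assuming only a stock openssl on the node:
	
	  openssl x509 -noout -text -in /etc/docker/server.pem \
	    | grep -A1 'Subject Alternative Name'
	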
	I1120 21:23:13.810322  571789 provision.go:87] duration metric: took 326.026448ms to configureAuth
	I1120 21:23:13.810353  571789 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:13.810568  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:13.810697  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.837968  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.838338  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.838366  571789 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:14.162945  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:14.162974  571789 machine.go:97] duration metric: took 1.169111697s to provisionDockerMachine
	I1120 21:23:14.162987  571789 client.go:176] duration metric: took 7.819496914s to LocalClient.Create
	I1120 21:23:14.163010  571789 start.go:167] duration metric: took 7.81955499s to libmachine.API.Create "newest-cni-678421"
	I1120 21:23:14.163019  571789 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:14.163030  571789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:14.163109  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:14.163159  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.187939  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.299873  571789 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:14.304403  571789 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:14.304436  571789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:14.304458  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:14.304511  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:14.304580  571789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:14.304666  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:14.315114  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:14.341203  571789 start.go:296] duration metric: took 178.161388ms for postStartSetup
	I1120 21:23:14.341644  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.364787  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:14.365126  571789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:14.365189  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.388501  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.491729  571789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:14.498714  571789 start.go:128] duration metric: took 8.157415645s to createHost
	I1120 21:23:14.498748  571789 start.go:83] releasing machines lock for "newest-cni-678421", held for 8.157600418s
	I1120 21:23:14.498845  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.524498  571789 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:14.524558  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.524576  571789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:14.524652  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.549686  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.550328  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.730932  571789 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:14.739895  571789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:14.789379  571789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:14.795855  571789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:14.795934  571789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:14.829432  571789 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:23:14.829462  571789 start.go:496] detecting cgroup driver to use...
	I1120 21:23:14.829510  571789 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:14.829589  571789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:14.851761  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:14.867809  571789 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:14.867934  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:14.892255  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:14.918730  571789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:15.037147  571789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:15.171533  571789 docker.go:234] disabling docker service ...
	I1120 21:23:15.171611  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:15.196938  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:15.214136  571789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:15.323780  571789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:15.444697  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:15.464324  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:15.484640  571789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:15.484705  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.499771  571789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:15.499842  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.512691  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.526079  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.538826  571789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:15.550121  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.562853  571789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.582104  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.595993  571789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:15.606890  571789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
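	
	The sed edits above leave four effective overrides in /etc/crio/crio.conf.d/02-crio.conf: the pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and a default sysctl that opens unprivileged ports. A quick check of the result before the daemon-reload and crio restart below (the grep pattern is illustrative):
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	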
	I1120 21:23:15.617086  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:15.737596  571789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:16.600257  571789 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:16.600349  571789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:16.605892  571789 start.go:564] Will wait 60s for crictl version
	I1120 21:23:16.606027  571789 ssh_runner.go:195] Run: which crictl
	I1120 21:23:16.610690  571789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:16.637058  571789 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:16.637154  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.670116  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.704078  571789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:16.705267  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:16.724295  571789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:16.728925  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.741905  571789 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1120 21:23:14.392714  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:16.891564  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:15.268024  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:17.768172  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:16.742987  571789 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:16.743128  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:16.743179  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.780101  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.780125  571789 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:16.780172  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.809837  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.809872  571789 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:16.809883  571789 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:16.810002  571789 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:16.810090  571789 ssh_runner.go:195] Run: crio config
	I1120 21:23:16.863639  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:16.863659  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:16.863681  571789 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:16.863704  571789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:16.863822  571789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
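	
	A rendered config like the one above can be sanity-checked without mutating node state via kubeadm's documented dry-run mode, for example:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run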
	
	I1120 21:23:16.863884  571789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:16.873403  571789 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:16.873494  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:16.881985  571789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:16.896085  571789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:16.913519  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 21:23:16.928859  571789 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:16.933334  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.945027  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:17.031776  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:17.058982  571789 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:17.059010  571789 certs.go:195] generating shared ca certs ...
	I1120 21:23:17.059029  571789 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.059186  571789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:17.059248  571789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:17.059262  571789 certs.go:257] generating profile certs ...
	I1120 21:23:17.059323  571789 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:17.059344  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt with IP's: []
	I1120 21:23:17.213357  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt ...
	I1120 21:23:17.213389  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt: {Name:mke2db14d5c940e88a112fbde2b7f7a5c236c264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213571  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key ...
	I1120 21:23:17.213582  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key: {Name:mk64627472328d961f5d0acc5bb1ae55a18c598e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213666  571789 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:17.213689  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1120 21:23:17.465354  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb ...
	I1120 21:23:17.465382  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb: {Name:mk1f657111bdac9ee1dbd7f52b9080823e78b0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465538  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb ...
	I1120 21:23:17.465551  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb: {Name:mk0b65e76824a55204f187e73dc35407cb7853bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465624  571789 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt
	I1120 21:23:17.465704  571789 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key
	I1120 21:23:17.465758  571789 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:17.465775  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt with IP's: []
	I1120 21:23:17.786236  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt ...
	I1120 21:23:17.786271  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt: {Name:mkf64e7d9fa7e272a656caab1db35f0d50079c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.786461  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key ...
	I1120 21:23:17.786477  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key: {Name:mkadbe10d3a0cb1e1581b893a1e5760fc272fd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.787184  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:17.787274  571789 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:17.787292  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:17.787316  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:17.787339  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:17.787359  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:17.787408  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:17.788027  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:17.809571  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:17.829725  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:17.850042  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:17.870161  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:17.891028  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:17.910446  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:17.930120  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:17.949077  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:17.975331  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:17.995043  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:18.013730  571789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:18.027267  571789 ssh_runner.go:195] Run: openssl version
	I1120 21:23:18.033999  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.042006  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:18.049852  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053838  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053894  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.092857  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:18.101344  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:23:18.109957  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.119032  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:18.127682  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132138  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132200  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.181104  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:18.189712  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:23:18.198524  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.206158  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:18.213580  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217316  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217376  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.254832  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:23:18.263424  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
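	
	The openssl/ln pairs above reproduce OpenSSL's CApath lookup convention by hand: each trust anchor must be reachable through a symlink named <subject-hash>.0 (b5213941, 51391683 and 3ec20f2e in this run). The same two steps for any one cert, sketched:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	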
	I1120 21:23:18.272052  571789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:18.276149  571789 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:23:18.276225  571789 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:18.276317  571789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:18.276376  571789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:18.305337  571789 cri.go:89] found id: ""
	I1120 21:23:18.305409  571789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:18.314096  571789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:23:18.322873  571789 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:23:18.322928  571789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:23:18.331021  571789 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:23:18.331048  571789 kubeadm.go:158] found existing configuration files:
	
	I1120 21:23:18.331102  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:23:18.338959  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:23:18.339007  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:23:18.346732  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:23:18.354398  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:23:18.354456  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:23:18.361888  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.370477  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:23:18.370533  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.378355  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:23:18.387242  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:23:18.387302  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:23:18.397935  571789 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:23:18.476910  571789 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:23:18.555544  571789 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.319772933Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.323492761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:41 no-preload-166874 crio[561]: time="2025-11-20T21:22:41.323525321Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.454638714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a49754bb-dd29-4b92-a3b8-b1d7923c140e name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.457501908Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ca12f62-0bcb-44f0-bc28-56c17405ce27 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.460429531Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=d98765ef-229d-49d4-b41f-03d784eb0df0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.460584078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.467090834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.467591302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.518488245Z" level=info msg="Created container b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=d98765ef-229d-49d4-b41f-03d784eb0df0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.519255405Z" level=info msg="Starting container: b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1" id=86bf27e9-8a4d-4b71-a832-0dce40a5c7fb name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:22:51 no-preload-166874 crio[561]: time="2025-11-20T21:22:51.52112505Z" level=info msg="Started container" PID=1762 containerID=b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper id=86bf27e9-8a4d-4b71-a832-0dce40a5c7fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee52d014781c6cda9904ed7690822b023e3d0e960e7883c66df4ad51a411c4e0
	Nov 20 21:22:52 no-preload-166874 crio[561]: time="2025-11-20T21:22:52.551448779Z" level=info msg="Removing container: bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d" id=aa3fab69-a34c-4670-a904-571c72a24a52 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:22:52 no-preload-166874 crio[561]: time="2025-11-20T21:22:52.563471184Z" level=info msg="Removed container bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=aa3fab69-a34c-4670-a904-571c72a24a52 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.454088067Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=731577ae-a71e-46ab-a73d-5db51ed56c32 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.45524913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6dac1c4-5ec4-4a5b-b596-761d2cb1bd7f name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.456433461Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=a89fee83-e598-4718-b07d-ad69b5d17277 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.456592955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.463956389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.464611943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.518304559Z" level=info msg="Created container b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=a89fee83-e598-4718-b07d-ad69b5d17277 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.51927645Z" level=info msg="Starting container: b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc" id=065a4f4f-cede-4b5f-af9e-93ed0b5ce16d name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.521694418Z" level=info msg="Started container" PID=1795 containerID=b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper id=065a4f4f-cede-4b5f-af9e-93ed0b5ce16d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee52d014781c6cda9904ed7690822b023e3d0e960e7883c66df4ad51a411c4e0
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.619722572Z" level=info msg="Removing container: b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1" id=b2071b91-c98c-44c5-88a1-d1326eee0f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:14 no-preload-166874 crio[561]: time="2025-11-20T21:23:14.632950232Z" level=info msg="Removed container b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc/dashboard-metrics-scraper" id=b2071b91-c98c-44c5-88a1-d1326eee0f42 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b3b90890c938a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   ee52d014781c6       dashboard-metrics-scraper-6ffb444bf9-hk6zc   kubernetes-dashboard
	a894cbc2c5d9a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   b46626e9a1900       kubernetes-dashboard-855c9754f9-nljn5        kubernetes-dashboard
	b3875ad2b3649       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Running             storage-provisioner         1                   70b393d45cbcd       storage-provisioner                          kube-system
	e7a63af5ecd5a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   f762bf97ab975       busybox                                      default
	2332c3c8973f2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   dc30dfebd1b48       coredns-66bc5c9577-knwbq                     kube-system
	5d3ac01cd9d5f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   e9e5d30490a15       kindnet-w6hk4                                kube-system
	1075c3753d9c8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   70b393d45cbcd       storage-provisioner                          kube-system
	6d5e4d46c8745       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   ff28845816bc6       kube-proxy-8mtnk                             kube-system
	61e04250b1fad       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   2ffc9e6c27d9b       kube-controller-manager-no-preload-166874    kube-system
	fc952607f1385       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   04fbd2b0394a1       kube-apiserver-no-preload-166874             kube-system
	4abdd4a141a63       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   32392c1979e7a       kube-scheduler-no-preload-166874             kube-system
	e79d1f101bc84       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   7f5597a0c9a9e       etcd-no-preload-166874                       kube-system
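
The table above is the node's CRI view at capture time: the Exited dashboard-metrics-scraper (ATTEMPT 3) matches the CrashLoopBackOff entries in the kubelet section below, and the Exited storage-provisioner (attempt 0) is the pre-restart instance whose failure appears in its own section. The same view can be regenerated on the node (a generic spot-check, not part of the harness):

  minikube ssh -p no-preload-166874 -- sudo crictl ps -a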
	
	
	==> coredns [2332c3c8973f298a863eed6a0515b849aa4f8d1a2f77ba4b6a85de7956b2c193] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35066 - 55821 "HINFO IN 6859988204258836041.3573769941691693785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018043157s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
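
The reflector timeouts at the end target 10.96.0.1:443, the in-cluster Service VIP for the apiserver, and are consistent with the control-plane restart window visible in the etcd and kube-apiserver sections below. A quick way to confirm the VIP is served again (a generic check, not part of the harness):

  kubectl --context no-preload-166874 get endpointslices -l kubernetes.io/service-name=kubernetes
  kubectl --context no-preload-166874 get --raw /readyz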
	
	
	==> describe nodes <==
	Name:               no-preload-166874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-166874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=no-preload-166874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-166874
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:00 +0000   Thu, 20 Nov 2025 21:21:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-166874
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                ad73315e-0ad1-465a-82ef-174a9e25f51f
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-knwbq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-166874                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-w6hk4                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-166874              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-166874     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-8mtnk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-166874              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hk6zc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nljn5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                 node-controller  Node no-preload-166874 event: Registered Node no-preload-166874 in Controller
	  Normal  NodeReady                97s                  kubelet          Node no-preload-166874 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-166874 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-166874 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-166874 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node no-preload-166874 event: Registered Node no-preload-166874 in Controller
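
The doubled Starting/NodeHasSufficient* events reflect two kubelet starts: the original boot (~2m ago) and the restart (~56s ago) driven by this test group's stop/start sequence. The same events can be pulled straight from the API (a generic query, not part of the harness):

  kubectl --context no-preload-166874 get events -A --field-selector involvedObject.name=no-preload-166874 --sort-by=.lastTimestamp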
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
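
The "martian source" messages are routine noise in nested-container networking: packets from the 10.244.0.0/24 pod range arrive on eth0 and reverse-path filtering logs them. The relevant sysctls can be inspected with (a generic check):

  minikube ssh -p no-preload-166874 -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians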
	
	
	==> etcd [e79d1f101bc84e961615911048f92beccf8f7107f3579cf1d3b9871e84687fa0] <==
	{"level":"warn","ts":"2025-11-20T21:22:29.068082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.077416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.083988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.090585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.098339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.104604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.112166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.118325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.124948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.141438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.147549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.154654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.162325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.176882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.184159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.190808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.197958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.205666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.213804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.220642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.243838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.251723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.266856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:29.332064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36982","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:22:40.217979Z","caller":"traceutil/trace.go:172","msg":"trace[1792678598] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"135.196089ms","start":"2025-11-20T21:22:40.082759Z","end":"2025-11-20T21:22:40.217955Z","steps":["trace[1792678598] 'process raft request'  (duration: 65.557523ms)","trace[1792678598] 'compare'  (duration: 69.519832ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:23:23 up  4:05,  0 user,  load average: 4.19, 4.62, 2.95
	Linux no-preload-166874 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d3ac01cd9d5f02148c348fd391c7e9136aea61e2874596e0d6011e60b790d4f] <==
	I1120 21:22:31.101835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:22:31.102119       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1120 21:22:31.102375       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:31.102398       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:31.102432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:31.303858       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:31.303890       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:31.303901       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:31.304066       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:22:31.804308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:31.804345       1 metrics.go:72] Registering metrics
	I1120 21:22:31.804453       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:41.303706       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:22:41.303790       1 main.go:301] handling current node
	I1120 21:22:51.305399       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:22:51.305431       1 main.go:301] handling current node
	I1120 21:23:01.303783       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:23:01.303836       1 main.go:301] handling current node
	I1120 21:23:11.310810       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:23:11.310862       1 main.go:301] handling current node
	I1120 21:23:21.309376       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1120 21:23:21.309414       1 main.go:301] handling current node
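
The "nri plugin exited" line is non-fatal: /var/run/nri/nri.sock is absent because NRI is not enabled in this runtime, and kindnet goes on to sync its caches and handle the node every 10s. Confirming the socket is absent (a trivial check):

  minikube ssh -p no-preload-166874 -- "ls /var/run/nri/ || echo 'NRI not enabled'"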
	
	
	==> kube-apiserver [fc952607f13856160424086480b5695232ec19743fc65d60befb339f7fa0bb87] <==
	I1120 21:22:29.828200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:22:29.828211       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 21:22:29.832174       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:22:29.833692       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:29.835241       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:22:29.840536       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:22:29.843799       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:22:29.845951       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:22:29.846011       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:22:29.856909       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:22:29.857005       1 policy_source.go:240] refreshing policies
	I1120 21:22:29.857475       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:30.132355       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:22:30.159733       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:22:30.180138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:30.187207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:30.194723       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:22:30.228271       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.96.0"}
	I1120 21:22:30.237738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.57.154"}
	I1120 21:22:30.744025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:33.512012       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:22:33.563290       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:22:33.662035       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [61e04250b1fad5affc7e7b2cf988fd20a167428f6bd5ca907a9770f968f47fab] <==
	I1120 21:22:33.125269       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:22:33.127478       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:22:33.129748       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:22:33.131966       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:22:33.138197       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:22:33.139398       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:22:33.141734       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:22:33.158388       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:22:33.158411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:22:33.158448       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:22:33.158485       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:22:33.158514       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:22:33.158546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:22:33.158563       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:22:33.158569       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:22:33.159939       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:22:33.164634       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:33.165460       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:33.173860       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:22:33.173918       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:22:33.173959       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:22:33.173970       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:22:33.173977       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:22:33.177159       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:22:33.180436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [6d5e4d46c87453c01cee4d13fc2422303e9a061de4a51f4e61c977a7279d60ab] <==
	I1120 21:22:30.875448       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:22:30.969360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:22:31.070209       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:22:31.070279       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1120 21:22:31.070376       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:22:31.090267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:31.090329       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:22:31.095838       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:22:31.096282       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:22:31.096324       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:31.097667       1 config.go:200] "Starting service config controller"
	I1120 21:22:31.097706       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:22:31.097717       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:22:31.097729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:22:31.097802       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:22:31.097832       1 config.go:309] "Starting node config controller"
	I1120 21:22:31.097840       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:22:31.097838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:22:31.097847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:22:31.198050       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:22:31.198259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:22:31.198283       1 shared_informer.go:356] "Caches are synced" controller="service config"
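
The single E-level line above is a configuration hint, not a failure: with nodePortAddresses unset, NodePort services accept connections on all local IPs, and kube-proxy suggests `--nodeport-addresses primary`. On this kubeadm-style cluster the setting would live in the kube-proxy ConfigMap; inspecting it is safe (changing it is left as a sketch, assuming this kube-proxy version accepts the primary keyword):

  kubectl --context no-preload-166874 -n kube-system get configmap kube-proxy -o yaml | grep -n -A2 nodePortAddresses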
	
	
	==> kube-scheduler [4abdd4a141a63098bb8d46a5d73bdba1af24aa753f5ef315c232eaa7bc7a0605] <==
	I1120 21:22:28.406755       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:22:29.823599       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:22:29.823636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:29.831263       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:22:29.831299       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:22:29.831302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.831330       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.831420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.831435       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.831719       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:22:29.831815       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:22:29.932174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:29.932258       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:29.932312       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 20 21:22:36 no-preload-166874 kubelet[715]: I1120 21:22:36.499060     715 scope.go:117] "RemoveContainer" containerID="a0069d8b7ab256789648daae55a9a1614c7094b5fbe4cc3a780ac97cbf74e516"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: I1120 21:22:37.503789     715 scope.go:117] "RemoveContainer" containerID="a0069d8b7ab256789648daae55a9a1614c7094b5fbe4cc3a780ac97cbf74e516"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: I1120 21:22:37.504125     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:37 no-preload-166874 kubelet[715]: E1120 21:22:37.504371     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:38 no-preload-166874 kubelet[715]: I1120 21:22:38.507936     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:38 no-preload-166874 kubelet[715]: E1120 21:22:38.508127     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:40 no-preload-166874 kubelet[715]: I1120 21:22:40.571408     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:40 no-preload-166874 kubelet[715]: E1120 21:22:40.571627     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:44 no-preload-166874 kubelet[715]: I1120 21:22:44.167526     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nljn5" podStartSLOduration=4.33760598 podStartE2EDuration="11.167499125s" podCreationTimestamp="2025-11-20 21:22:33 +0000 UTC" firstStartedPulling="2025-11-20 21:22:33.954279727 +0000 UTC m=+6.592594948" lastFinishedPulling="2025-11-20 21:22:40.784172865 +0000 UTC m=+13.422488093" observedRunningTime="2025-11-20 21:22:41.52771218 +0000 UTC m=+14.166027414" watchObservedRunningTime="2025-11-20 21:22:44.167499125 +0000 UTC m=+16.805814360"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: I1120 21:22:51.454033     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: I1120 21:22:51.545617     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:22:51 no-preload-166874 kubelet[715]: E1120 21:22:51.545795     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: I1120 21:22:52.550179     715 scope.go:117] "RemoveContainer" containerID="bcb978637bd714de454488f84f8b41fa1f5a594412d3c9569e4fcf1b2ad3d53d"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: I1120 21:22:52.550452     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:22:52 no-preload-166874 kubelet[715]: E1120 21:22:52.550641     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:00 no-preload-166874 kubelet[715]: I1120 21:23:00.572341     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:00 no-preload-166874 kubelet[715]: E1120 21:23:00.572563     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.453442     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.618371     715 scope.go:117] "RemoveContainer" containerID="b914b6fa8b66d29d574e735f7e4604057d90656df7cf50d8d5c82d67ec37b6b1"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: I1120 21:23:14.618662     715 scope.go:117] "RemoveContainer" containerID="b3b90890c938a4a3a079c33897f6719a7043f7a7b69800b48e26ef2db13c7acc"
	Nov 20 21:23:14 no-preload-166874 kubelet[715]: E1120 21:23:14.618849     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hk6zc_kubernetes-dashboard(05f796e8-b3b4-4c14-a00c-9b708061b6ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hk6zc" podUID="05f796e8-b3b4-4c14-a00c-9b708061b6ae"
	Nov 20 21:23:18 no-preload-166874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:18 no-preload-166874 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:18 no-preload-166874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:18 no-preload-166874 systemd[1]: kubelet.service: Consumed 1.741s CPU time.
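
These kubelet lines show the standard CrashLoopBackOff progression for dashboard-metrics-scraper: the back-off doubles with each failed restart (10s, 20s, 40s, capped at 5m). The usual triage is to read the previous container attempt's output (generic kubectl usage, not part of the harness):

  kubectl --context no-preload-166874 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-hk6zc --previous
  kubectl --context no-preload-166874 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-hk6zc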
	
	
	==> kubernetes-dashboard [a894cbc2c5d9a1366aa228439fd6e6836895cc4703f84abb78c459dd47ea9041] <==
	2025/11/20 21:22:40 Starting overwatch
	2025/11/20 21:22:40 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:40 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:40 Using secret token for csrf signing
	2025/11/20 21:22:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:40 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:22:40 Generating JWE encryption key
	2025/11/20 21:22:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:40 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:40 Creating in-cluster Sidecar client
	2025/11/20 21:22:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:41 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
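
The dashboard itself is serving on port 9090; its metric-client health check fails only because it queries the dashboard-metrics-scraper Service, whose backing pod is crash-looping (see the kubelet section above), so it retries every 30 seconds. Both objects can be listed side by side (generic kubectl usage):

  kubectl --context no-preload-166874 -n kubernetes-dashboard get svc,pods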
	
	
	==> storage-provisioner [1075c3753d9c86ea762b5e73fac57de6dd495a8909c8d3e9513494941f62d1a9] <==
	I1120 21:22:30.840057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:30.845484       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
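
This first storage-provisioner instance raced the restarting control plane: its version probe at 21:22:30.8 was refused because kube-proxy had not yet reprogrammed the 10.96.0.1 Service rules (its caches synced at ~21:22:31.2, per the kube-proxy section above), so the container exited and the restarted instance below took over. Its restart count is visible via (generic kubectl usage):

  kubectl --context no-preload-166874 -n kube-system get pod storage-provisioner -o jsonpath='{.status.containerStatuses[0].restartCount}'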
	
	
	==> storage-provisioner [b3875ad2b3649c5e0dc8dcfc3269750df1e05a74c2414af8cf9687dc24bbfecc] <==
	W1120 21:22:58.990113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:00.993510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:00.998042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:03.002725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:03.010963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:05.015125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:05.020305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:07.024433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:07.031262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:09.035017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:09.039775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:11.043586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:11.058375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:13.064884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:13.071914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:15.076319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:15.080740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:17.085633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:17.090008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.093564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.098563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.102492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.106984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.111729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.119237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-166874 -n no-preload-166874
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-166874 -n no-preload-166874: exit status 2 (369.880882ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-166874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.31s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.195882ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
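
The failure here is in minikube's pre-check, not in the addon itself: before enabling an addon, minikube confirms the cluster is not paused by running `sudo runc list -f json` on the node, and on this crio-based node that fails because /run/runc does not exist, surfacing as MK_ADDON_ENABLE_PAUSED. The check can be reproduced by hand while the profile is up (a generic diagnostic):

  minikube ssh -p newest-cni-678421 -- "sudo runc list -f json; sudo ls /run/runc"
  minikube ssh -p newest-cni-678421 -- sudo crictl ps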
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-678421
E1120 21:23:35.907767  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect newest-cni-678421:

-- stdout --
	[
	    {
	        "Id": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	        "Created": "2025-11-20T21:23:11.873210251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 572642,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:23:12.258290962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hosts",
	        "LogPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14-json.log",
	        "Name": "/newest-cni-678421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-678421:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-678421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	                "LowerDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-678421",
	                "Source": "/var/lib/docker/volumes/newest-cni-678421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-678421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-678421",
	                "name.minikube.sigs.k8s.io": "newest-cni-678421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "187c487baf718a0aa35f670e0a29898c2b5cc9d1fa0bad90c9fb29b06a680f0e",
	            "SandboxKey": "/var/run/docker/netns/187c487baf71",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-678421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "093eb702f51d45a3830e07e67ca1106b8fab033ac409a63fdd5ab62c257a2c9e",
	                    "EndpointID": "33651c162e2d5508a7dc88a1cbbbc4e576c2f46b69920500b1df98508fa6d329",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "a2:f9:78:33:ae:e0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-678421",
	                        "e821ad74a972"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
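Note: the dump above is ordinary "docker container inspect" output for the newest-cni-678421 node container. The part the harness cares about is NetworkSettings.Ports, which maps the container's SSH and apiserver ports (22/tcp, 8443/tcp) to ephemeral host ports on 127.0.0.1 (33128 and 33131 in this run). A minimal Go sketch of reading that mapping (illustrative only, not minikube's actual code; struct fields follow the JSON above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // portBinding mirrors one entry of NetworkSettings.Ports in the inspect JSON.
    type portBinding struct {
    	HostIp   string
    	HostPort string
    }

    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]portBinding
    	}
    }

    func main() {
    	// "docker container inspect" prints a JSON array, one entry per container.
    	out, err := exec.Command("docker", "container", "inspect", "newest-cni-678421").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "inspect failed:", err)
    		os.Exit(1)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
    		fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
    		os.Exit(1)
    	}
    	// 22/tcp is the container's SSH port; the binding gives the host side.
    	for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
    		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33128 in this run
    	}
    }

The same HostPort value is what the provisioning log below extracts with a Go template ("docker container inspect -f ... 22/tcp ... HostPort") before dialing SSH.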
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-678421 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:21 UTC │
	│ start   │ -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:06.049501  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049513  571789 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:06.049519  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049841  571789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:06.050567  571789 out.go:368] Setting JSON to false
	I1120 21:23:06.052400  571789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14728,"bootTime":1763659058,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:06.052535  571789 start.go:143] virtualization: kvm guest
	I1120 21:23:06.055111  571789 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:06.056602  571789 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:06.056605  571789 notify.go:221] Checking for updates...
	I1120 21:23:06.062930  571789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:06.067567  571789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:06.069232  571789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:06.070624  571789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:06.072902  571789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:06.074784  571789 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.074945  571789 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075081  571789 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075229  571789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:06.120678  571789 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:06.120819  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.216315  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.199460321 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.216453  571789 docker.go:319] overlay module found
	I1120 21:23:06.218400  571789 out.go:179] * Using the docker driver based on user configuration
	I1120 21:23:06.219702  571789 start.go:309] selected driver: docker
	I1120 21:23:06.219714  571789 start.go:930] validating driver "docker" against <nil>
	I1120 21:23:06.219729  571789 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:06.220696  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.302193  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.29041782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.302376  571789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 21:23:06.302402  571789 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 21:23:06.302588  571789 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:06.305364  571789 out.go:179] * Using Docker driver with root privileges
	I1120 21:23:06.306728  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:06.306783  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:06.306792  571789 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:23:06.306891  571789 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:06.308307  571789 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:06.309596  571789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:06.311056  571789 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:06.312309  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.312345  571789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:06.312360  571789 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:06.312412  571789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:06.312479  571789 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:06.312494  571789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:06.312653  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:06.312677  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json: {Name:mkf4f376b35371249315ca8102adde29558a901f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:06.340931  571789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:06.340959  571789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:06.340975  571789 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:06.341010  571789 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:06.341132  571789 start.go:364] duration metric: took 97.864µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:06.341163  571789 start.go:93] Provisioning new machine with config: &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:06.341279  571789 start.go:125] createHost starting for "" (driver="docker")
	W1120 21:23:05.393230  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:07.891482  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:23:05.205163  567536 addons.go:515] duration metric: took 2.420707864s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:05.695398  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:05.702083  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:05.702112  567536 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:06.195506  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:06.201376  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:23:06.202743  567536 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:06.202819  567536 api_server.go:131] duration metric: took 1.008149378s to wait for apiserver health ...
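	Context for the healthz exchange above: minikube simply polls the apiserver's /healthz endpoint until it returns 200; a 500 response enumerates the post-start hooks, and here only poststarthook/rbac/bootstrap-roles was still pending before the check flipped to "ok". A rough Go sketch of such a wait loop (the URL is taken from the log; the real client trusts the cluster CA instead of skipping TLS verification):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Endpoint copied from the log; --apiserver-port=8444 for this profile.
	    	url := "https://192.168.85.2:8444/healthz"
	    	// For brevity this skips certificate verification; minikube instead
	    	// builds a transport that trusts the cluster CA.
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(2 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			code := resp.StatusCode
	    			resp.Body.Close()
	    			if code == http.StatusOK {
	    				fmt.Println("apiserver healthy")
	    				return
	    			}
	    			// A 500 here lists the post-start hooks still pending, as above.
	    			fmt.Printf("healthz returned %d, retrying\n", code)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("timed out waiting for apiserver health")
	    }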
	I1120 21:23:06.202844  567536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:06.209670  567536 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:06.209779  567536 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.209798  567536 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.209807  567536 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.209817  567536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.209832  567536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.209838  567536 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.209845  567536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.209856  567536 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.209865  567536 system_pods.go:74] duration metric: took 7.010955ms to wait for pod list to return data ...
	I1120 21:23:06.209877  567536 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:06.215993  567536 default_sa.go:45] found service account: "default"
	I1120 21:23:06.216099  567536 default_sa.go:55] duration metric: took 6.211471ms for default service account to be created ...
	I1120 21:23:06.216167  567536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:23:06.219656  567536 system_pods.go:86] 8 kube-system pods found
	I1120 21:23:06.219693  567536 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.219715  567536 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.219722  567536 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.219731  567536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.219739  567536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.219745  567536 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.219754  567536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.219761  567536 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.219771  567536 system_pods.go:126] duration metric: took 3.576854ms to wait for k8s-apps to be running ...
	I1120 21:23:06.219780  567536 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:23:06.219827  567536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:06.242346  567536 system_svc.go:56] duration metric: took 22.555852ms WaitForService to wait for kubelet
	I1120 21:23:06.242379  567536 kubeadm.go:587] duration metric: took 3.45805481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:23:06.242401  567536 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:06.248588  567536 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:06.248623  567536 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:06.248641  567536 node_conditions.go:105] duration metric: took 6.233957ms to run NodePressure ...
	I1120 21:23:06.248657  567536 start.go:242] waiting for startup goroutines ...
	I1120 21:23:06.248666  567536 start.go:247] waiting for cluster config update ...
	I1120 21:23:06.248680  567536 start.go:256] writing updated cluster config ...
	I1120 21:23:06.249011  567536 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:06.254875  567536 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:06.260944  567536 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:23:08.267255  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:06.343254  571789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:23:06.343455  571789 start.go:159] libmachine.API.Create for "newest-cni-678421" (driver="docker")
	I1120 21:23:06.343482  571789 client.go:173] LocalClient.Create starting
	I1120 21:23:06.343553  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:23:06.343582  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343598  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.343655  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:23:06.343676  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343686  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.344001  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:23:06.362461  571789 cli_runner.go:211] docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:23:06.362549  571789 network_create.go:284] running [docker network inspect newest-cni-678421] to gather additional debugging logs...
	I1120 21:23:06.362568  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421
	W1120 21:23:06.383025  571789 cli_runner.go:211] docker network inspect newest-cni-678421 returned with exit code 1
	I1120 21:23:06.383064  571789 network_create.go:287] error running [docker network inspect newest-cni-678421]: docker network inspect newest-cni-678421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-678421 not found
	I1120 21:23:06.383078  571789 network_create.go:289] output of [docker network inspect newest-cni-678421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-678421 not found
	
	** /stderr **
	I1120 21:23:06.383171  571789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:06.403776  571789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:23:06.404546  571789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:23:06.405526  571789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:23:06.406341  571789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ab433249a4f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:74:b3:0e:d4:91} reservation:<nil>}
	I1120 21:23:06.407123  571789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4a91837c366f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:66:0c:88:d0:b5:58} reservation:<nil>}
	I1120 21:23:06.407767  571789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6bf71dac4c7d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:29:7e:d9:60:3c} reservation:<nil>}
	I1120 21:23:06.408763  571789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f00f50}
	I1120 21:23:06.408794  571789 network_create.go:124] attempt to create docker network newest-cni-678421 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:23:06.408864  571789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-678421 newest-cni-678421
	I1120 21:23:06.467067  571789 network_create.go:108] docker network newest-cni-678421 192.168.103.0/24 created
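	The subnet probe above works the way the log suggests: candidate /24 blocks start at 192.168.49.0 and step by 9 in the third octet (49, 58, 67, ...), and the first block not already claimed by an existing bridge wins, 192.168.103.0/24 in this run. A compact Go sketch of that selection (the taken set is hard-coded from the "skipping subnet" lines; minikube discovers it from the host's interfaces):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	// Subnets already claimed by other profiles, per the log above.
	    	taken := map[string]bool{
	    		"192.168.49.0": true, "192.168.58.0": true, "192.168.67.0": true,
	    		"192.168.76.0": true, "192.168.85.0": true, "192.168.94.0": true,
	    	}
	    	// Candidate /24 blocks step by 9 in the third octet: 49, 58, 67, 76, 85, 94, 103, ...
	    	for third := 49; third <= 254; third += 9 {
	    		ip := net.IPv4(192, 168, byte(third), 0)
	    		if !taken[ip.String()] {
	    			fmt.Printf("using free private subnet %s/24\n", ip) // 192.168.103.0/24 in this run
	    			return
	    		}
	    	}
	    	fmt.Println("no free /24 found in 192.168.0.0/16")
	    }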
	I1120 21:23:06.467117  571789 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-678421" container
	I1120 21:23:06.467193  571789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:23:06.485312  571789 cli_runner.go:164] Run: docker volume create newest-cni-678421 --label name.minikube.sigs.k8s.io=newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:23:06.505057  571789 oci.go:103] Successfully created a docker volume newest-cni-678421
	I1120 21:23:06.505146  571789 cli_runner.go:164] Run: docker run --rm --name newest-cni-678421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --entrypoint /usr/bin/test -v newest-cni-678421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:23:06.958057  571789 oci.go:107] Successfully prepared a docker volume newest-cni-678421
	I1120 21:23:06.958140  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.958154  571789 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:23:06.958256  571789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1120 21:23:09.892319  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:11.894030  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:10.767056  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:12.767732  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:11.773995  571789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.815679837s)
	I1120 21:23:11.774033  571789 kic.go:203] duration metric: took 4.815876955s to extract preloaded images to volume ...
	W1120 21:23:11.774136  571789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:23:11.774185  571789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:23:11.774253  571789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:23:11.850339  571789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-678421 --name newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-678421 --network newest-cni-678421 --ip 192.168.103.2 --volume newest-cni-678421:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:23:12.533350  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Running}}
	I1120 21:23:12.555197  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.575307  571789 cli_runner.go:164] Run: docker exec newest-cni-678421 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:23:12.632671  571789 oci.go:144] the created container "newest-cni-678421" has a running status.
	I1120 21:23:12.632720  571789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa...
	I1120 21:23:12.863151  571789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:23:12.899100  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.920234  571789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:23:12.920260  571789 kic_runner.go:114] Args: [docker exec --privileged newest-cni-678421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:23:12.970999  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.993837  571789 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:12.993956  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.013867  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.014157  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.014178  571789 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:13.161308  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.161339  571789 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:13.161406  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.181829  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.182058  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.182073  571789 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:13.328927  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.329019  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.349098  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.349376  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.349398  571789 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:13.484139  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:13.484177  571789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:13.484259  571789 ubuntu.go:190] setting up certificates
	I1120 21:23:13.484275  571789 provision.go:84] configureAuth start
	I1120 21:23:13.484350  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:13.503703  571789 provision.go:143] copyHostCerts
	I1120 21:23:13.503779  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:13.503794  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:13.503883  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:13.504018  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:13.504032  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:13.504073  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:13.504158  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:13.504168  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:13.504202  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:13.504315  571789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
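	The SAN set chosen above can be confirmed after generation with openssl (an illustrative check, not part of this run):
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'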
	I1120 21:23:13.626916  571789 provision.go:177] copyRemoteCerts
	I1120 21:23:13.626988  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:13.627031  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.646188  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:13.742867  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:13.765755  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:13.787099  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:13.810322  571789 provision.go:87] duration metric: took 326.026448ms to configureAuth
	I1120 21:23:13.810353  571789 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:13.810568  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:13.810697  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.837968  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.838338  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.838366  571789 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:14.162945  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:14.162974  571789 machine.go:97] duration metric: took 1.169111697s to provisionDockerMachine
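	The crio restart in the SSH command above matters because CRIO_MINIKUBE_OPTIONS is only read at service start; this sketch assumes (as in kicbase images) that the crio unit loads /etc/sysconfig/crio.minikube as an EnvironmentFile, which can be verified on the node before relying on it:
	
	    # confirm the unit actually sources the file written above (assumption)
	    systemctl cat crio | grep -n 'crio.minikube'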
	I1120 21:23:14.162987  571789 client.go:176] duration metric: took 7.819496914s to LocalClient.Create
	I1120 21:23:14.163010  571789 start.go:167] duration metric: took 7.81955499s to libmachine.API.Create "newest-cni-678421"
	I1120 21:23:14.163019  571789 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:14.163030  571789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:14.163109  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:14.163159  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.187939  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.299873  571789 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:14.304403  571789 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:14.304436  571789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:14.304458  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:14.304511  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:14.304580  571789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:14.304666  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:14.315114  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:14.341203  571789 start.go:296] duration metric: took 178.161388ms for postStartSetup
	I1120 21:23:14.341644  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.364787  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:14.365126  571789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:14.365189  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.388501  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.491729  571789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:14.498714  571789 start.go:128] duration metric: took 8.157415645s to createHost
	I1120 21:23:14.498748  571789 start.go:83] releasing machines lock for "newest-cni-678421", held for 8.157600418s
	I1120 21:23:14.498845  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.524498  571789 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:14.524558  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.524576  571789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:14.524652  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.549686  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.550328  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.730932  571789 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:14.739895  571789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:14.789379  571789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:14.795855  571789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:14.795934  571789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:14.829432  571789 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:23:14.829462  571789 start.go:496] detecting cgroup driver to use...
	I1120 21:23:14.829510  571789 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:14.829589  571789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:14.851761  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:14.867809  571789 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:14.867934  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:14.892255  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:14.918730  571789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:15.037147  571789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:15.171533  571789 docker.go:234] disabling docker service ...
	I1120 21:23:15.171611  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:15.196938  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:15.214136  571789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:15.323780  571789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:15.444697  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:15.464324  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:15.484640  571789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:15.484705  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.499771  571789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:15.499842  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.512691  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.526079  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.538826  571789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:15.550121  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.562853  571789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.582104  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
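	Taken together, the sed passes above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the edits (the log never prints the file, and the TOML table headers are assumed):
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]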
	I1120 21:23:15.595993  571789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:15.606890  571789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:15.617086  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:15.737596  571789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:16.600257  571789 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:16.600349  571789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:16.605892  571789 start.go:564] Will wait 60s for crictl version
	I1120 21:23:16.606027  571789 ssh_runner.go:195] Run: which crictl
	I1120 21:23:16.610690  571789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:16.637058  571789 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:16.637154  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.670116  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.704078  571789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:16.705267  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:16.724295  571789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:16.728925  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.741905  571789 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1120 21:23:14.392714  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:16.891564  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:15.268024  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:17.768172  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:16.742987  571789 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:16.743128  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:16.743179  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.780101  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.780125  571789 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:16.780172  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.809837  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.809872  571789 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:16.809883  571789 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:16.810002  571789 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:16.810090  571789 ssh_runner.go:195] Run: crio config
	I1120 21:23:16.863639  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:16.863659  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:16.863681  571789 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:16.863704  571789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:16.863822  571789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:23:16.863884  571789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:16.873403  571789 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:16.873494  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:16.881985  571789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:16.896085  571789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:16.913519  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
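	With the rendered kubeadm config now on the node, it can be linted before init; recent kubeadm releases ship a validator (hedged: kubeadm config validate is available in kubeadm v1.26+):
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new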
	I1120 21:23:16.928859  571789 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:16.933334  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
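	The hosts-file idiom above avoids the classic `sudo echo >> /etc/hosts` trap, where the redirection runs as the unprivileged shell: the entry is rebuilt in /tmp first and copied into place with sudo. The same one-liner, expanded for readability:
	
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo $'192.168.103.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts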
	I1120 21:23:16.945027  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:17.031776  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:17.058982  571789 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:17.059010  571789 certs.go:195] generating shared ca certs ...
	I1120 21:23:17.059029  571789 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.059186  571789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:17.059248  571789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:17.059262  571789 certs.go:257] generating profile certs ...
	I1120 21:23:17.059323  571789 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:17.059344  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt with IP's: []
	I1120 21:23:17.213357  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt ...
	I1120 21:23:17.213389  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt: {Name:mke2db14d5c940e88a112fbde2b7f7a5c236c264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213571  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key ...
	I1120 21:23:17.213582  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key: {Name:mk64627472328d961f5d0acc5bb1ae55a18c598e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213666  571789 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:17.213689  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1120 21:23:17.465354  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb ...
	I1120 21:23:17.465382  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb: {Name:mk1f657111bdac9ee1dbd7f52b9080823e78b0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465538  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb ...
	I1120 21:23:17.465551  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb: {Name:mk0b65e76824a55204f187e73dc35407cb7853bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465624  571789 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt
	I1120 21:23:17.465704  571789 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key
	I1120 21:23:17.465758  571789 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:17.465775  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt with IP's: []
	I1120 21:23:17.786236  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt ...
	I1120 21:23:17.786271  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt: {Name:mkf64e7d9fa7e272a656caab1db35f0d50079c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.786461  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key ...
	I1120 21:23:17.786477  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key: {Name:mkadbe10d3a0cb1e1581b893a1e5760fc272fd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
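	Each generated leaf certificate can be checked against the CA that signed it (illustrative openssl calls; paths as in this run, with the aggregator cert verified against the proxy-client CA):
	
	    cd /home/jenkins/minikube-integration/21923-250580/.minikube
	    openssl verify -CAfile ca.crt profiles/newest-cni-678421/apiserver.crt
	    openssl verify -CAfile proxy-client-ca.crt profiles/newest-cni-678421/proxy-client.crt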
	I1120 21:23:17.787184  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:17.787274  571789 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:17.787292  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:17.787316  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:17.787339  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:17.787359  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:17.787408  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:17.788027  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:17.809571  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:17.829725  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:17.850042  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:17.870161  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:17.891028  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:17.910446  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:17.930120  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:17.949077  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:17.975331  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:17.995043  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:18.013730  571789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:18.027267  571789 ssh_runner.go:195] Run: openssl version
	I1120 21:23:18.033999  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.042006  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:18.049852  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053838  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053894  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.092857  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:18.101344  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:23:18.109957  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.119032  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:18.127682  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132138  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132200  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.181104  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:18.189712  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:23:18.198524  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.206158  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:18.213580  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217316  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217376  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.254832  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:23:18.263424  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
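	The <hash>.0 symlink names above follow OpenSSL's subject-hash convention, which lets TLS clients locate a CA by hash instead of scanning the directory; the same link can be derived by hand (sketch, matching the b5213941.0 link created earlier):
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"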
	I1120 21:23:18.272052  571789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:18.276149  571789 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:23:18.276225  571789 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:18.276317  571789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:18.276376  571789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:18.305337  571789 cri.go:89] found id: ""
	I1120 21:23:18.305409  571789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:18.314096  571789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:23:18.322873  571789 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:23:18.322928  571789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:23:18.331021  571789 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:23:18.331048  571789 kubeadm.go:158] found existing configuration files:
	
	I1120 21:23:18.331102  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:23:18.338959  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:23:18.339007  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:23:18.346732  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:23:18.354398  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:23:18.354456  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:23:18.361888  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.370477  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:23:18.370533  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.378355  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:23:18.387242  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:23:18.387302  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:23:18.397935  571789 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:23:18.476910  571789 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:23:18.555544  571789 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1120 21:23:19.391574  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:21.392350  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:23.892432  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:20.268264  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:22.768335  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:25.391690  564741 pod_ready.go:94] pod "coredns-66bc5c9577-g47lf" is "Ready"
	I1120 21:23:25.391743  564741 pod_ready.go:86] duration metric: took 35.50597602s for pod "coredns-66bc5c9577-g47lf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.394978  564741 pod_ready.go:83] waiting for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.399661  564741 pod_ready.go:94] pod "etcd-embed-certs-714571" is "Ready"
	I1120 21:23:25.399686  564741 pod_ready.go:86] duration metric: took 4.680651ms for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.402021  564741 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.405973  564741 pod_ready.go:94] pod "kube-apiserver-embed-certs-714571" is "Ready"
	I1120 21:23:25.405997  564741 pod_ready.go:86] duration metric: took 3.949841ms for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.407841  564741 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.589324  564741 pod_ready.go:94] pod "kube-controller-manager-embed-certs-714571" is "Ready"
	I1120 21:23:25.589354  564741 pod_ready.go:86] duration metric: took 181.489846ms for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.789548  564741 pod_ready.go:83] waiting for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.190409  564741 pod_ready.go:94] pod "kube-proxy-nlj6n" is "Ready"
	I1120 21:23:26.190444  564741 pod_ready.go:86] duration metric: took 400.867423ms for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.390084  564741 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789386  564741 pod_ready.go:94] pod "kube-scheduler-embed-certs-714571" is "Ready"
	I1120 21:23:26.789415  564741 pod_ready.go:86] duration metric: took 399.299576ms for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789427  564741 pod_ready.go:40] duration metric: took 36.907183518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:26.838827  564741 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:26.847832  564741 out.go:179] * Done! kubectl is now configured to use "embed-certs-714571" cluster and "default" namespace by default
	W1120 21:23:25.269305  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:27.766813  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:29.313350  571789 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:23:29.313459  571789 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:23:29.313610  571789 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:23:29.313681  571789 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 21:23:29.313746  571789 kubeadm.go:319] OS: Linux
	I1120 21:23:29.313822  571789 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:23:29.313901  571789 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:23:29.313981  571789 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:23:29.314064  571789 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:23:29.314133  571789 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:23:29.314196  571789 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:23:29.314321  571789 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:23:29.314392  571789 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 21:23:29.314498  571789 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:23:29.314637  571789 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:23:29.314764  571789 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:23:29.314845  571789 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:23:29.316777  571789 out.go:252]   - Generating certificates and keys ...
	I1120 21:23:29.316887  571789 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:23:29.316965  571789 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:23:29.317057  571789 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:23:29.317139  571789 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:23:29.317270  571789 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:23:29.317353  571789 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:23:29.317420  571789 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:23:29.317573  571789 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317651  571789 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:23:29.317860  571789 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317969  571789 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:23:29.318061  571789 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:23:29.318157  571789 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:23:29.318279  571789 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:23:29.318369  571789 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:23:29.318477  571789 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:23:29.318544  571789 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:23:29.318663  571789 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:23:29.318746  571789 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:23:29.318856  571789 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:23:29.318951  571789 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:23:29.320271  571789 out.go:252]   - Booting up control plane ...
	I1120 21:23:29.320352  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:23:29.320414  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:23:29.320475  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:23:29.320580  571789 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:23:29.320662  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:23:29.320749  571789 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:23:29.320816  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:23:29.320848  571789 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:23:29.320957  571789 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:23:29.321044  571789 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:23:29.321097  571789 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.080831ms
	I1120 21:23:29.321173  571789 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:23:29.321275  571789 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1120 21:23:29.321347  571789 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:23:29.321409  571789 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:23:29.321471  571789 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.185451221s
	I1120 21:23:29.321533  571789 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.636543543s
	I1120 21:23:29.321592  571789 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002088865s
	I1120 21:23:29.321680  571789 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:23:29.321892  571789 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:23:29.321990  571789 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:23:29.322312  571789 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-678421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:23:29.322381  571789 kubeadm.go:319] [bootstrap-token] Using token: bgtwzb.1jmxu7h8xrihsar6
	I1120 21:23:29.324385  571789 out.go:252]   - Configuring RBAC rules ...
	I1120 21:23:29.324482  571789 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:23:29.324564  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:23:29.324693  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:23:29.324827  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:23:29.324923  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:23:29.324996  571789 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:23:29.325094  571789 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:23:29.325134  571789 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:23:29.325175  571789 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:23:29.325181  571789 kubeadm.go:319] 
	I1120 21:23:29.325253  571789 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:23:29.325263  571789 kubeadm.go:319] 
	I1120 21:23:29.325343  571789 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:23:29.325356  571789 kubeadm.go:319] 
	I1120 21:23:29.325382  571789 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:23:29.325434  571789 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:23:29.325475  571789 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:23:29.325480  571789 kubeadm.go:319] 
	I1120 21:23:29.325522  571789 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:23:29.325529  571789 kubeadm.go:319] 
	I1120 21:23:29.325566  571789 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:23:29.325571  571789 kubeadm.go:319] 
	I1120 21:23:29.325614  571789 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:23:29.325680  571789 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:23:29.325744  571789 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:23:29.325751  571789 kubeadm.go:319] 
	I1120 21:23:29.325875  571789 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:23:29.325954  571789 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:23:29.325961  571789 kubeadm.go:319] 
	I1120 21:23:29.326041  571789 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326142  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 21:23:29.326163  571789 kubeadm.go:319] 	--control-plane 
	I1120 21:23:29.326170  571789 kubeadm.go:319] 
	I1120 21:23:29.326275  571789 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:23:29.326284  571789 kubeadm.go:319] 
	I1120 21:23:29.326350  571789 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326438  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
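	The join commands above embed a discovery-token CA cert hash. If the printed value is ever lost, it can be recomputed from the cluster CA using the standard openssl pipeline from the kubeadm docs; a minimal sketch, assuming the conventional CA path (on a minikube node the CA typically lives under /var/lib/minikube/certs/ rather than /etc/kubernetes/pki/):
	
	  # Recompute the sha256 discovery hash of the cluster CA public key
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'
	
	The sed prefix puts the output into the sha256:<hex> form that kubeadm join expects.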
	I1120 21:23:29.326473  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:29.326482  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:29.327679  571789 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:23:29.328583  571789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:23:29.332973  571789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:23:29.332989  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:23:29.346147  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
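	After the kindnet manifest is applied, the CNI pods still have to roll out before the node can go Ready. A minimal check, mirroring the log's use of the bundled kubectl and kubeconfig, and assuming the DaemonSet is named kindnet (the kindnet-454t9 pod later in this log suggests it is):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status ds/kindnet --timeout=60s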
	I1120 21:23:29.573681  571789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:23:29.573755  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:29.573814  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-678421 minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=newest-cni-678421 minikube.k8s.io/primary=true
	I1120 21:23:29.667824  571789 ops.go:34] apiserver oom_adj: -16
	I1120 21:23:29.667832  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.168146  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.667910  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 21:23:30.266667  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:32.767122  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:31.168579  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:31.668019  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.168742  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.668690  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.168318  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.668957  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.168656  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.668775  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.751348  571789 kubeadm.go:1114] duration metric: took 5.177641354s to wait for elevateKubeSystemPrivileges
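	The five-second elevateKubeSystemPrivileges wait above is just a poll: minikube reruns `kubectl get sa default` roughly every 500ms until the default ServiceAccount exists, at which point the minikube-rbac ClusterRoleBinding create can succeed. An equivalent shell sketch of that polling pattern:
	
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry until the default ServiceAccount has been created
	  done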
	I1120 21:23:34.751396  571789 kubeadm.go:403] duration metric: took 16.475185755s to StartCluster
	I1120 21:23:34.751420  571789 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.751503  571789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:34.753522  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.753817  571789 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:34.753838  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:23:34.753968  571789 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:34.754086  571789 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:34.754105  571789 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	I1120 21:23:34.754113  571789 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:34.754136  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.754148  571789 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:34.754313  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:34.754557  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.754792  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.756687  571789 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:34.759774  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:34.786514  571789 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:34.787803  571789 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.787913  571789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:34.788029  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.790763  571789 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	I1120 21:23:34.790812  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.791301  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.823476  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.825705  571789 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:34.825731  571789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:34.825787  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.852807  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.872624  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:23:34.937278  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:34.942585  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.968135  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:35.062180  571789 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
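	The sed pipeline a few lines up rewrites the CoreDNS Corefile in place to add a hosts block mapping host.minikube.internal to the gateway IP. To confirm the injected record landed, the Corefile can be read back out of the ConfigMap:
	
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'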
	I1120 21:23:35.063734  571789 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:35.063787  571789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:35.251967  571789 api_server.go:72] duration metric: took 498.105523ms to wait for apiserver process to appear ...
	I1120 21:23:35.252003  571789 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:35.252029  571789 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:35.257634  571789 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:35.258651  571789 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:35.258686  571789 api_server.go:131] duration metric: took 6.676193ms to wait for apiserver health ...
	I1120 21:23:35.258695  571789 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:35.258787  571789 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:23:35.260410  571789 addons.go:515] duration metric: took 506.448699ms for enable addons: enabled=[storage-provisioner default-storageclass]
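	Only storage-provisioner and default-storageclass were requested in the toEnable map above, so those are the two addons reported enabled. One way to cross-check addon state for this profile (assuming a minikube binary on PATH):
	
	  minikube -p newest-cni-678421 addons list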
	I1120 21:23:35.261510  571789 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:35.261546  571789 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261567  571789 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:35.261587  571789 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:35.261597  571789 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:35.261608  571789 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:35.261621  571789 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:35.261629  571789 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:35.261638  571789 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261647  571789 system_pods.go:74] duration metric: took 2.944734ms to wait for pod list to return data ...
	I1120 21:23:35.261657  571789 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:35.264365  571789 default_sa.go:45] found service account: "default"
	I1120 21:23:35.264386  571789 default_sa.go:55] duration metric: took 2.722096ms for default service account to be created ...
	I1120 21:23:35.264399  571789 kubeadm.go:587] duration metric: took 510.545674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:35.264416  571789 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:35.266974  571789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:35.267006  571789 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:35.267024  571789 node_conditions.go:105] duration metric: took 2.601571ms to run NodePressure ...
	I1120 21:23:35.267040  571789 start.go:242] waiting for startup goroutines ...
	I1120 21:23:35.567181  571789 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-678421" context rescaled to 1 replicas
	I1120 21:23:35.567240  571789 start.go:247] waiting for cluster config update ...
	I1120 21:23:35.567257  571789 start.go:256] writing updated cluster config ...
	I1120 21:23:35.567561  571789 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:35.624092  571789 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:35.625572  571789 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
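	Once start returns, the kubeconfig context already points at the new cluster, so a smoke test needs nothing beyond the context name:
	
	  kubectl --context newest-cni-678421 get nodes -o wide
	  kubectl --context newest-cni-678421 -n kube-system get pods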
	
	
	==> CRI-O <==
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.682570184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.686036797Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7dfe7f92-8789-413f-9230-675d13ffb1a5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.686341973Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f92bf258-1e52-47d4-bd0c-208fe3c19a4a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.687665048Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.688104816Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.688353283Z" level=info msg="Ran pod sandbox 651c17ada914000efc5dd86ac04d8e2dc6cf2c7f32d8d841209fa8ccee0ff9a0 with infra container: kube-system/kube-proxy-t5jmf/POD" id=7dfe7f92-8789-413f-9230-675d13ffb1a5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.688899072Z" level=info msg="Ran pod sandbox e85c1d528b2f9bf60c921ca24c3ce5dfee719621d1edf0043a88c7d1b7b1a18b with infra container: kube-system/kindnet-454t9/POD" id=f92bf258-1e52-47d4-bd0c-208fe3c19a4a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.689649899Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=96663054-3050-440f-b3e5-29993bbdab39 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.690029851Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b00aef98-36a7-442c-a23a-0401ec7d68fc name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.690705278Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=df8b7a28-92a3-4bde-881e-2389885165a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.690863487Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3f9b2dbc-5ce8-47f5-bc7e-02cddd7d0de8 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.694825604Z" level=info msg="Creating container: kube-system/kube-proxy-t5jmf/kube-proxy" id=e7f5ceaa-50ab-47b6-8692-5bed43e3dcbc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.694938664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.696370774Z" level=info msg="Creating container: kube-system/kindnet-454t9/kindnet-cni" id=cf4335c1-c116-4d44-bae3-c232a8302169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.696477162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.701426003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.702074895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.702743515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.703123361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.732554791Z" level=info msg="Created container c7e703ca60339f853fcff730d53e599f2d4c4f7414b83a356b0101c193f7f39c: kube-system/kindnet-454t9/kindnet-cni" id=cf4335c1-c116-4d44-bae3-c232a8302169 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.733710335Z" level=info msg="Starting container: c7e703ca60339f853fcff730d53e599f2d4c4f7414b83a356b0101c193f7f39c" id=41719e6d-a549-4b6a-ac4a-8f3b23b9e694 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.734337065Z" level=info msg="Created container 5b2953f27b4cd57166a483e42aa74b76b985a379366a2cfef1e179390d21e3c4: kube-system/kube-proxy-t5jmf/kube-proxy" id=e7f5ceaa-50ab-47b6-8692-5bed43e3dcbc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.734924682Z" level=info msg="Starting container: 5b2953f27b4cd57166a483e42aa74b76b985a379366a2cfef1e179390d21e3c4" id=8ba44a38-f353-4a08-844b-28d187164af7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.736855195Z" level=info msg="Started container" PID=1561 containerID=c7e703ca60339f853fcff730d53e599f2d4c4f7414b83a356b0101c193f7f39c description=kube-system/kindnet-454t9/kindnet-cni id=41719e6d-a549-4b6a-ac4a-8f3b23b9e694 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e85c1d528b2f9bf60c921ca24c3ce5dfee719621d1edf0043a88c7d1b7b1a18b
	Nov 20 21:23:34 newest-cni-678421 crio[779]: time="2025-11-20T21:23:34.738585552Z" level=info msg="Started container" PID=1562 containerID=5b2953f27b4cd57166a483e42aa74b76b985a379366a2cfef1e179390d21e3c4 description=kube-system/kube-proxy-t5jmf/kube-proxy id=8ba44a38-f353-4a08-844b-28d187164af7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=651c17ada914000efc5dd86ac04d8e2dc6cf2c7f32d8d841209fa8ccee0ff9a0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c7e703ca60339       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   e85c1d528b2f9       kindnet-454t9                               kube-system
	5b2953f27b4cd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   651c17ada9140       kube-proxy-t5jmf                            kube-system
	bfd84b0ec5ac6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   3061ab143aae9       etcd-newest-cni-678421                      kube-system
	4e4be75bf051c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   9d93e554aa1c4       kube-apiserver-newest-cni-678421            kube-system
	a210b0c99b91a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   ee792708c7a2c       kube-scheduler-newest-cni-678421            kube-system
	69f9bb8e08bd9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   0e587c4b56d8d       kube-controller-manager-newest-cni-678421   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-678421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-678421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-678421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:23:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-678421
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:28 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:28 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:28 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 21:23:28 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-678421
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                aea61b33-8516-4da2-aaf9-1fdf3bc040c2
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-678421                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-454t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-678421             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-678421    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-t5jmf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-678421             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-678421 event: Registered Node newest-cni-678421 in Controller
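	At capture time the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config exists yet in /etc/cni/net.d/; the taint clears once kindnet writes its config and the Ready condition flips to True. A sketch of waiting for that transition:
	
	  kubectl --context newest-cni-678421 wait --for=condition=Ready node/newest-cni-678421 --timeout=120s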
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [bfd84b0ec5ac6517b772eb35c255e5e3ebe84619012d12393a240a2706f1a186] <==
	{"level":"warn","ts":"2025-11-20T21:23:25.303150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.311548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.319358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.325748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.333126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.340353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.346688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.354416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.361159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.372391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.378567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.385398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.392845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.401541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.409311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.416891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.424131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.431899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.439261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.446402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.452901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.467882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.476828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.484384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:25.549721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:23:37 up  4:05,  0 user,  load average: 3.99, 4.55, 2.96
	Linux newest-cni-678421 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c7e703ca60339f853fcff730d53e599f2d4c4f7414b83a356b0101c193f7f39c] <==
	I1120 21:23:34.909148       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:23:34.909386       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:23:34.909523       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:23:34.909547       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:23:34.909578       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:23:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:23:35.202851       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:23:35.202876       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:23:35.202890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:23:35.203850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:23:35.502959       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:23:35.502986       1 metrics.go:72] Registering metrics
	I1120 21:23:35.503035       1 controller.go:711] "Syncing nftables rules"
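	kindnet's network-policy controller programs its rules via nftables (the "Syncing nftables rules" line above). If those rules ever need inspecting on the node, nft can dump them directly; a minimal sketch:
	
	  sudo nft list tables            # look for the table kindnet created
	  sudo nft list ruleset | head -n 40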
	
	
	==> kube-apiserver [4e4be75bf051c7077edf2099e27712a08dd5b32ca5c2401d6fd4632efd8d31ff] <==
	I1120 21:23:26.146003       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:23:26.146036       1 policy_source.go:240] refreshing policies
	I1120 21:23:26.182422       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:23:26.187899       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:26.188579       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1120 21:23:26.199350       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:26.199948       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:23:26.320713       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:23:26.985670       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1120 21:23:26.989679       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1120 21:23:26.989704       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:23:27.515005       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:23:27.555618       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:23:27.690512       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1120 21:23:27.696552       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1120 21:23:27.697518       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:23:27.701284       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:23:28.566016       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:23:28.713407       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:23:28.723296       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1120 21:23:28.732050       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1120 21:23:34.263244       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:34.267793       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:34.360207       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1120 21:23:34.462104       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [69f9bb8e08bd90bf611621c7c1decae91245d4a60cb2e3340989ff2e882ef318] <==
	I1120 21:23:33.524728       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:23:33.531110       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:23:33.537323       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1120 21:23:33.545687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:23:33.552010       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:23:33.557566       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:23:33.557589       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:23:33.557596       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:23:33.557602       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:23:33.557682       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:23:33.557999       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:23:33.558142       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1120 21:23:33.558251       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:23:33.558275       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1120 21:23:33.558996       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1120 21:23:33.559028       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1120 21:23:33.559110       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1120 21:23:33.559120       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:23:33.559117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:23:33.559351       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:23:33.559409       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:23:33.561380       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:23:33.561944       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1120 21:23:33.563087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:23:33.580398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5b2953f27b4cd57166a483e42aa74b76b985a379366a2cfef1e179390d21e3c4] <==
	I1120 21:23:34.794376       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:23:34.876557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:23:34.977881       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:23:34.977931       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1120 21:23:34.978067       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:23:35.003130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:23:35.003209       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:23:35.009995       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:23:35.010698       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:23:35.010792       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:35.014429       1 config.go:309] "Starting node config controller"
	I1120 21:23:35.014459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:23:35.014468       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:23:35.014776       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:23:35.014787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:23:35.014893       1 config.go:200] "Starting service config controller"
	I1120 21:23:35.014901       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:23:35.014918       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:23:35.014923       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:23:35.115705       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:23:35.115774       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:23:35.115801       1 shared_informer.go:356] "Caches are synced" controller="service config"
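	kube-proxy is running in iptables mode here, so each Service ends up as entries in the KUBE-SERVICES chain of the nat table. They can be inspected on the node with:
	
	  sudo iptables -t nat -L KUBE-SERVICES -n | head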
	
	
	==> kube-scheduler [a210b0c99b91a41e8c8f0eae8b7ccfb208f062888735d7de8100ed2c5286040f] <==
	E1120 21:23:26.076858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1120 21:23:26.076853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:23:26.077084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1120 21:23:26.077098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1120 21:23:26.077212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1120 21:23:26.077268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1120 21:23:26.077297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:23:26.077376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1120 21:23:26.077416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1120 21:23:26.077429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:23:26.077470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1120 21:23:26.077479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:23:26.077595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1120 21:23:26.080060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:23:26.080154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1120 21:23:26.080238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1120 21:23:26.080361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1120 21:23:27.036053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1120 21:23:27.072093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1120 21:23:27.145905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1120 21:23:27.198680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1120 21:23:27.208082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1120 21:23:27.280778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1120 21:23:27.311020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1120 21:23:30.073750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:23:28 newest-cni-678421 kubelet[1354]: I1120 21:23:28.737662    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b550e359109c79d888541a788a0f281-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-678421\" (UID: \"6b550e359109c79d888541a788a0f281\") " pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:28 newest-cni-678421 kubelet[1354]: I1120 21:23:28.737683    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/46e79d01be8d4434f3688d83b2dafaf3-etcd-certs\") pod \"etcd-newest-cni-678421\" (UID: \"46e79d01be8d4434f3688d83b2dafaf3\") " pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:28 newest-cni-678421 kubelet[1354]: I1120 21:23:28.737735    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61de524a126da7ee6f9b7302f99bf6e0-ca-certs\") pod \"kube-apiserver-newest-cni-678421\" (UID: \"61de524a126da7ee6f9b7302f99bf6e0\") " pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.531372    1354 apiserver.go:52] "Watching apiserver"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.536567    1354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.575166    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-678421" podStartSLOduration=1.575144217 podStartE2EDuration="1.575144217s" podCreationTimestamp="2025-11-20 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:29.565600651 +0000 UTC m=+1.095094203" watchObservedRunningTime="2025-11-20 21:23:29.575144217 +0000 UTC m=+1.104637758"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.584763    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-678421" podStartSLOduration=1.584702524 podStartE2EDuration="1.584702524s" podCreationTimestamp="2025-11-20 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:29.575060316 +0000 UTC m=+1.104553867" watchObservedRunningTime="2025-11-20 21:23:29.584702524 +0000 UTC m=+1.114196075"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.591466    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.591540    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.595136    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-678421" podStartSLOduration=1.595108982 podStartE2EDuration="1.595108982s" podCreationTimestamp="2025-11-20 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:29.584953057 +0000 UTC m=+1.114446607" watchObservedRunningTime="2025-11-20 21:23:29.595108982 +0000 UTC m=+1.124602535"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: I1120 21:23:29.595372    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-678421" podStartSLOduration=1.595361049 podStartE2EDuration="1.595361049s" podCreationTimestamp="2025-11-20 21:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:29.595256403 +0000 UTC m=+1.124749956" watchObservedRunningTime="2025-11-20 21:23:29.595361049 +0000 UTC m=+1.124854602"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: E1120 21:23:29.599428    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-678421\" already exists" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:29 newest-cni-678421 kubelet[1354]: E1120 21:23:29.600434    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-678421\" already exists" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:33 newest-cni-678421 kubelet[1354]: I1120 21:23:33.612131    1354 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 21:23:33 newest-cni-678421 kubelet[1354]: I1120 21:23:33.613279    1354 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483441    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-xtables-lock\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483502    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57nf\" (UniqueName: \"kubernetes.io/projected/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-kube-api-access-m57nf\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483531    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-xtables-lock\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483558    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-cni-cfg\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483588    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-lib-modules\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483614    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15b0f18f-00f6-4f9c-9554-0054d1da612b-kube-proxy\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483634    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-lib-modules\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:34 newest-cni-678421 kubelet[1354]: I1120 21:23:34.483654    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgpk\" (UniqueName: \"kubernetes.io/projected/15b0f18f-00f6-4f9c-9554-0054d1da612b-kube-api-access-9xgpk\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:35 newest-cni-678421 kubelet[1354]: I1120 21:23:35.629390    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5jmf" podStartSLOduration=1.6293687669999999 podStartE2EDuration="1.629368767s" podCreationTimestamp="2025-11-20 21:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:35.618947872 +0000 UTC m=+7.148441424" watchObservedRunningTime="2025-11-20 21:23:35.629368767 +0000 UTC m=+7.158862318"
	Nov 20 21:23:35 newest-cni-678421 kubelet[1354]: I1120 21:23:35.629528    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-454t9" podStartSLOduration=1.629518999 podStartE2EDuration="1.629518999s" podCreationTimestamp="2025-11-20 21:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-20 21:23:35.629338309 +0000 UTC m=+7.158831862" watchObservedRunningTime="2025-11-20 21:23:35.629518999 +0000 UTC m=+7.159012551"
	

                                                
                                                
-- /stdout --
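The kube-scheduler "Failed to watch ... is forbidden" errors captured in the log above are confined to apiserver startup; the scheduler reports "Caches are synced" at 21:23:30, so they read as a transient startup race rather than a standing RBAC problem. A quick impersonation check to confirm that, assuming the test kubeconfig is allowed to impersonate system:kube-scheduler (a sketch, not part of the harness):

	kubectl --context newest-cni-678421 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context newest-cni-678421 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler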
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-678421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6kdrd storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner: exit status 1 (62.486501ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6kdrd" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-714571 --alsologtostderr -v=1
E1120 21:23:39.309030  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.315485  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.327670  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.349672  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.391318  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.472812  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.634793  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:39.957002  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:40.598658  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-714571 --alsologtostderr -v=1: exit status 80 (2.47930154s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-714571 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:23:38.633507  578224 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:38.633696  578224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:38.633712  578224 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:38.633717  578224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:38.633916  578224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:38.634150  578224 out.go:368] Setting JSON to false
	I1120 21:23:38.634202  578224 mustload.go:66] Loading cluster: embed-certs-714571
	I1120 21:23:38.634567  578224 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:38.635001  578224 cli_runner.go:164] Run: docker container inspect embed-certs-714571 --format={{.State.Status}}
	I1120 21:23:38.654569  578224 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:23:38.654847  578224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:38.711632  578224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-20 21:23:38.701806524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:38.712312  578224 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-714571 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:23:38.714255  578224 out.go:179] * Pausing node embed-certs-714571 ... 
	I1120 21:23:38.715426  578224 host.go:66] Checking if "embed-certs-714571" exists ...
	I1120 21:23:38.715707  578224 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:38.715753  578224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-714571
	I1120 21:23:38.734962  578224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/embed-certs-714571/id_rsa Username:docker}
	I1120 21:23:38.831394  578224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:38.844353  578224 pause.go:52] kubelet running: true
	I1120 21:23:38.844425  578224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:38.997085  578224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:38.997165  578224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:39.074491  578224 cri.go:89] found id: "a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62"
	I1120 21:23:39.074523  578224 cri.go:89] found id: "24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30"
	I1120 21:23:39.074531  578224 cri.go:89] found id: "7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a"
	I1120 21:23:39.074535  578224 cri.go:89] found id: "eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45"
	I1120 21:23:39.074540  578224 cri.go:89] found id: "a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06"
	I1120 21:23:39.074545  578224 cri.go:89] found id: "211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c"
	I1120 21:23:39.074549  578224 cri.go:89] found id: "1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0"
	I1120 21:23:39.074553  578224 cri.go:89] found id: "037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37"
	I1120 21:23:39.074557  578224 cri.go:89] found id: "e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b"
	I1120 21:23:39.074571  578224 cri.go:89] found id: "0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	I1120 21:23:39.074574  578224 cri.go:89] found id: "369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85"
	I1120 21:23:39.074577  578224 cri.go:89] found id: ""
	I1120 21:23:39.074617  578224 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:39.087755  578224 retry.go:31] will retry after 258.771603ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:39Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:39.347360  578224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:39.361952  578224 pause.go:52] kubelet running: false
	I1120 21:23:39.362012  578224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:39.502601  578224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:39.502718  578224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:39.575162  578224 cri.go:89] found id: "a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62"
	I1120 21:23:39.575197  578224 cri.go:89] found id: "24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30"
	I1120 21:23:39.575204  578224 cri.go:89] found id: "7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a"
	I1120 21:23:39.575209  578224 cri.go:89] found id: "eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45"
	I1120 21:23:39.575240  578224 cri.go:89] found id: "a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06"
	I1120 21:23:39.575248  578224 cri.go:89] found id: "211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c"
	I1120 21:23:39.575256  578224 cri.go:89] found id: "1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0"
	I1120 21:23:39.575261  578224 cri.go:89] found id: "037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37"
	I1120 21:23:39.575268  578224 cri.go:89] found id: "e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b"
	I1120 21:23:39.575288  578224 cri.go:89] found id: "0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	I1120 21:23:39.575295  578224 cri.go:89] found id: "369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85"
	I1120 21:23:39.575299  578224 cri.go:89] found id: ""
	I1120 21:23:39.575359  578224 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:39.587435  578224 retry.go:31] will retry after 415.903178ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:39Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:40.004129  578224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:40.028299  578224 pause.go:52] kubelet running: false
	I1120 21:23:40.028365  578224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:40.164844  578224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:40.164952  578224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:40.233205  578224 cri.go:89] found id: "a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62"
	I1120 21:23:40.233258  578224 cri.go:89] found id: "24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30"
	I1120 21:23:40.233262  578224 cri.go:89] found id: "7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a"
	I1120 21:23:40.233265  578224 cri.go:89] found id: "eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45"
	I1120 21:23:40.233268  578224 cri.go:89] found id: "a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06"
	I1120 21:23:40.233272  578224 cri.go:89] found id: "211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c"
	I1120 21:23:40.233275  578224 cri.go:89] found id: "1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0"
	I1120 21:23:40.233278  578224 cri.go:89] found id: "037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37"
	I1120 21:23:40.233289  578224 cri.go:89] found id: "e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b"
	I1120 21:23:40.233296  578224 cri.go:89] found id: "0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	I1120 21:23:40.233299  578224 cri.go:89] found id: "369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85"
	I1120 21:23:40.233301  578224 cri.go:89] found id: ""
	I1120 21:23:40.233341  578224 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:40.246010  578224 retry.go:31] will retry after 555.303503ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:40Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:40.801585  578224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:40.815385  578224 pause.go:52] kubelet running: false
	I1120 21:23:40.815451  578224 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:40.954435  578224 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:40.954518  578224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:41.026964  578224 cri.go:89] found id: "a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62"
	I1120 21:23:41.026988  578224 cri.go:89] found id: "24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30"
	I1120 21:23:41.026992  578224 cri.go:89] found id: "7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a"
	I1120 21:23:41.026995  578224 cri.go:89] found id: "eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45"
	I1120 21:23:41.026998  578224 cri.go:89] found id: "a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06"
	I1120 21:23:41.027002  578224 cri.go:89] found id: "211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c"
	I1120 21:23:41.027005  578224 cri.go:89] found id: "1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0"
	I1120 21:23:41.027007  578224 cri.go:89] found id: "037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37"
	I1120 21:23:41.027010  578224 cri.go:89] found id: "e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b"
	I1120 21:23:41.027015  578224 cri.go:89] found id: "0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	I1120 21:23:41.027017  578224 cri.go:89] found id: "369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85"
	I1120 21:23:41.027020  578224 cri.go:89] found id: ""
	I1120 21:23:41.027056  578224 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:41.041815  578224 out.go:203] 
	W1120 21:23:41.043033  578224 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:23:41.043049  578224 out.go:285] * 
	* 
	W1120 21:23:41.047999  578224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:23:41.049408  578224 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-714571 --alsologtostderr -v=1 failed: exit status 80
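The pause fails while enumerating containers: per the trace above, minikube shells out to `sudo runc list -f json` inside the node, and runc's default state root /run/runc does not exist there (the repeated "open /run/runc: no such file or directory"). A few commands to narrow that down, assuming the docker driver and the node name from the log; this is a diagnosis sketch, not a verified fix:

	docker exec embed-certs-714571 ls -ld /run/runc                          # state root runc reads by default; missing per the error
	docker exec embed-certs-714571 sudo crictl ps -a                         # the same containers as cri-o tracks them
	docker exec embed-certs-714571 sudo runc --root /run/runc list -f json   # replays the failing call with an explicit root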
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-714571
helpers_test.go:243: (dbg) docker inspect embed-certs-714571:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	        "Created": "2025-11-20T21:21:34.898715026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 565066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:39.336356021Z",
	            "FinishedAt": "2025-11-20T21:22:37.720488069Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240-json.log",
	        "Name": "/embed-certs-714571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-714571:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-714571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	                "LowerDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-714571",
	                "Source": "/var/lib/docker/volumes/embed-certs-714571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-714571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-714571",
	                "name.minikube.sigs.k8s.io": "embed-certs-714571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4dac0e46f46cd8de31033ca170513fdb9e74dc3dff1f5af75cdbcceb26c387c9",
	            "SandboxKey": "/var/run/docker/netns/4dac0e46f46c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-714571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ab433249a4ff0be5f1bb45e1da7b7dc47bc44c49beb110d4c515f5ebe9f33a4",
	                    "EndpointID": "0ee8eb1f167b8236d2d0801f6708f4143fe5473e1054cd0ab5c57ddc3fc66451",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:5a:9b:8e:e2:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-714571",
	                        "ccf93eabab84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
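Worth noting in the inspect output: "Running": true and "Paused": false, so the outer Docker container was never pause-frozen; per the trace, minikube's pause acts on kubelet and the CRI containers inside the node, which is why the host still reports Running below. The same two fields can be pulled directly (sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-714571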
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571: exit status 2 (342.039815ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
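The exit-2 status is consistent with a half-paused node: the failed pause had already run `sudo systemctl disable --now kubelet` (pause.go:52 flips from "kubelet running: true" to "false" after the first retry above), leaving kubelet stopped while the control-plane containers keep running. A recovery sketch under those assumptions:

	docker exec embed-certs-714571 sudo systemctl enable --now kubelet
	out/minikube-linux-amd64 status -p embed-certs-714571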
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25
E1120 21:23:41.879968  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25: (1.124704172s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
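
	The "Log line format" header above is the standard klog layout. A minimal Go sketch (the regexp and names are my own, not minikube's) that splits one such line into severity, date, time, thread id, source location, and message:

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    // klogLine follows the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	    var klogLine = regexp.MustCompile(
	        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	    func main() {
	        sample := "I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ..."
	        if m := klogLine.FindStringSubmatch(sample); m != nil {
	            fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
	                m[1], m[2], m[3], m[4], m[5], m[6])
	        }
	    }
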
	I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:06.049501  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049513  571789 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:06.049519  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049841  571789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:06.050567  571789 out.go:368] Setting JSON to false
	I1120 21:23:06.052400  571789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14728,"bootTime":1763659058,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:06.052535  571789 start.go:143] virtualization: kvm guest
	I1120 21:23:06.055111  571789 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:06.056602  571789 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:06.056605  571789 notify.go:221] Checking for updates...
	I1120 21:23:06.062930  571789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:06.067567  571789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:06.069232  571789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:06.070624  571789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:06.072902  571789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:06.074784  571789 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.074945  571789 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075081  571789 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075229  571789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:06.120678  571789 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:06.120819  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.216315  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.199460321 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.216453  571789 docker.go:319] overlay module found
	I1120 21:23:06.218400  571789 out.go:179] * Using the docker driver based on user configuration
	I1120 21:23:06.219702  571789 start.go:309] selected driver: docker
	I1120 21:23:06.219714  571789 start.go:930] validating driver "docker" against <nil>
	I1120 21:23:06.219729  571789 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:06.220696  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.302193  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.29041782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.302376  571789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 21:23:06.302402  571789 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 21:23:06.302588  571789 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:06.305364  571789 out.go:179] * Using Docker driver with root privileges
	I1120 21:23:06.306728  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:06.306783  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:06.306792  571789 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:23:06.306891  571789 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:06.308307  571789 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:06.309596  571789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:06.311056  571789 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:06.312309  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.312345  571789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:06.312360  571789 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:06.312412  571789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:06.312479  571789 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:06.312494  571789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:06.312653  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:06.312677  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json: {Name:mkf4f376b35371249315ca8102adde29558a901f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:06.340931  571789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:06.340959  571789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:06.340975  571789 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:06.341010  571789 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:06.341132  571789 start.go:364] duration metric: took 97.864µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:06.341163  571789 start.go:93] Provisioning new machine with config: &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:06.341279  571789 start.go:125] createHost starting for "" (driver="docker")
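
	The acquireMachinesLock entries above show a poll-based lock spec ({Delay:500ms Timeout:10m0s}). A minimal Go sketch of that retry-with-delay pattern; tryAcquire is a hypothetical stand-in for whatever actually takes the lock:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // acquire retries tryAcquire every delay until it succeeds or the
	    // timeout elapses, matching the {Delay:500ms Timeout:10m0s} spec above.
	    func acquire(tryAcquire func() bool, delay, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for !tryAcquire() {
	            if time.Now().After(deadline) {
	                return errors.New("timed out waiting for machines lock")
	            }
	            time.Sleep(delay)
	        }
	        return nil
	    }

	    func main() {
	        attempts := 0
	        err := acquire(func() bool { attempts++; return attempts >= 3 },
	            500*time.Millisecond, 10*time.Minute)
	        fmt.Println("attempts:", attempts, "err:", err)
	    }
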
	W1120 21:23:05.393230  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:07.891482  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:23:05.205163  567536 addons.go:515] duration metric: took 2.420707864s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:05.695398  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:05.702083  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:05.702112  567536 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:06.195506  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:06.201376  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:23:06.202743  567536 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:06.202819  567536 api_server.go:131] duration metric: took 1.008149378s to wait for apiserver health ...
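
	The healthz exchange above (repeated 500s while poststarthook/rbac/bootstrap-roles is still pending, then a 200 "ok") is a plain poll loop against the apiserver. A minimal Go sketch of the same check, assuming only that the apiserver certificate is not in the system trust store (hence InsecureSkipVerify):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitHealthz polls /healthz until it returns 200 or the deadline passes.
	    // TLS verification is skipped because the cluster cert is self-issued.
	    func waitHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	                // A 500 with "[-]poststarthook/..." lines means a hook is pending.
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver %s never became healthy", url)
	    }

	    func main() {
	        if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
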
	I1120 21:23:06.202844  567536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:06.209670  567536 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:06.209779  567536 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.209798  567536 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.209807  567536 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.209817  567536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.209832  567536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.209838  567536 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.209845  567536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.209856  567536 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.209865  567536 system_pods.go:74] duration metric: took 7.010955ms to wait for pod list to return data ...
	I1120 21:23:06.209877  567536 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:06.215993  567536 default_sa.go:45] found service account: "default"
	I1120 21:23:06.216099  567536 default_sa.go:55] duration metric: took 6.211471ms for default service account to be created ...
	I1120 21:23:06.216167  567536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:23:06.219656  567536 system_pods.go:86] 8 kube-system pods found
	I1120 21:23:06.219693  567536 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.219715  567536 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.219722  567536 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.219731  567536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.219739  567536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.219745  567536 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.219754  567536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.219761  567536 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.219771  567536 system_pods.go:126] duration metric: took 3.576854ms to wait for k8s-apps to be running ...
	I1120 21:23:06.219780  567536 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:23:06.219827  567536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:06.242346  567536 system_svc.go:56] duration metric: took 22.555852ms WaitForService to wait for kubelet
	I1120 21:23:06.242379  567536 kubeadm.go:587] duration metric: took 3.45805481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:23:06.242401  567536 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:06.248588  567536 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:06.248623  567536 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:06.248641  567536 node_conditions.go:105] duration metric: took 6.233957ms to run NodePressure ...
	I1120 21:23:06.248657  567536 start.go:242] waiting for startup goroutines ...
	I1120 21:23:06.248666  567536 start.go:247] waiting for cluster config update ...
	I1120 21:23:06.248680  567536 start.go:256] writing updated cluster config ...
	I1120 21:23:06.249011  567536 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:06.254875  567536 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:06.260944  567536 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:23:08.267255  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:06.343254  571789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:23:06.343455  571789 start.go:159] libmachine.API.Create for "newest-cni-678421" (driver="docker")
	I1120 21:23:06.343482  571789 client.go:173] LocalClient.Create starting
	I1120 21:23:06.343553  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:23:06.343582  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343598  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.343655  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:23:06.343676  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343686  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.344001  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:23:06.362461  571789 cli_runner.go:211] docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:23:06.362549  571789 network_create.go:284] running [docker network inspect newest-cni-678421] to gather additional debugging logs...
	I1120 21:23:06.362568  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421
	W1120 21:23:06.383025  571789 cli_runner.go:211] docker network inspect newest-cni-678421 returned with exit code 1
	I1120 21:23:06.383064  571789 network_create.go:287] error running [docker network inspect newest-cni-678421]: docker network inspect newest-cni-678421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-678421 not found
	I1120 21:23:06.383078  571789 network_create.go:289] output of [docker network inspect newest-cni-678421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-678421 not found
	
	** /stderr **
	I1120 21:23:06.383171  571789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:06.403776  571789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:23:06.404546  571789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:23:06.405526  571789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:23:06.406341  571789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ab433249a4f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:74:b3:0e:d4:91} reservation:<nil>}
	I1120 21:23:06.407123  571789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4a91837c366f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:66:0c:88:d0:b5:58} reservation:<nil>}
	I1120 21:23:06.407767  571789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6bf71dac4c7d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:29:7e:d9:60:3c} reservation:<nil>}
	I1120 21:23:06.408763  571789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f00f50}
	I1120 21:23:06.408794  571789 network_create.go:124] attempt to create docker network newest-cni-678421 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:23:06.408864  571789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-678421 newest-cni-678421
	I1120 21:23:06.467067  571789 network_create.go:108] docker network newest-cni-678421 192.168.103.0/24 created
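
	The subnet scan above walks 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, ...) until it finds one that is free. A rough Go sketch of the same idea; unlike minikube, which inspects the existing Docker bridge networks, this version only checks addresses already assigned to local interfaces:

	    package main

	    import (
	        "fmt"
	        "net"
	    )

	    // freeSubnet returns the first 192.168.x.0/24 block (x = 49, 58, 67, ...)
	    // whose network address is not already in use on a local interface.
	    func freeSubnet() (string, error) {
	        taken := map[string]bool{}
	        addrs, err := net.InterfaceAddrs()
	        if err != nil {
	            return "", err
	        }
	        for _, a := range addrs {
	            if ipn, ok := a.(*net.IPNet); ok {
	                taken[ipn.IP.Mask(ipn.Mask).String()] = true
	            }
	        }
	        for third := 49; third <= 255; third += 9 {
	            base := fmt.Sprintf("192.168.%d.0", third)
	            if !taken[base] {
	                return base + "/24", nil
	            }
	        }
	        return "", fmt.Errorf("no free 192.168.0.0/16 /24 subnet found")
	    }

	    func main() {
	        s, err := freeSubnet()
	        fmt.Println(s, err) // e.g. 192.168.103.0/24 in the run above
	    }
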
	I1120 21:23:06.467117  571789 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-678421" container
	I1120 21:23:06.467193  571789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:23:06.485312  571789 cli_runner.go:164] Run: docker volume create newest-cni-678421 --label name.minikube.sigs.k8s.io=newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:23:06.505057  571789 oci.go:103] Successfully created a docker volume newest-cni-678421
	I1120 21:23:06.505146  571789 cli_runner.go:164] Run: docker run --rm --name newest-cni-678421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --entrypoint /usr/bin/test -v newest-cni-678421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:23:06.958057  571789 oci.go:107] Successfully prepared a docker volume newest-cni-678421
	I1120 21:23:06.958140  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.958154  571789 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:23:06.958256  571789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1120 21:23:09.892319  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:11.894030  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:10.767056  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:12.767732  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
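
	The pod_ready.go warnings interleaved above come from repeatedly checking each pod's Ready condition. A minimal Go sketch that performs the equivalent check by shelling out to kubectl (assumes a reachable kubeconfig; the pod name is taken from the log):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // podReady reports whether the pod's Ready condition is True,
	    // roughly the check behind the pod_ready.go waits above.
	    func podReady(namespace, pod string) (bool, error) {
	        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
	            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	        if err != nil {
	            return false, err
	        }
	        return strings.TrimSpace(string(out)) == "True", nil
	    }

	    func main() {
	        for {
	            ready, err := podReady("kube-system", "coredns-66bc5c9577-zkl9z")
	            if err != nil || !ready {
	                fmt.Println("pod not Ready yet, retrying:", err)
	                time.Sleep(2 * time.Second)
	                continue
	            }
	            fmt.Println("pod is Ready")
	            return
	        }
	    }
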
	I1120 21:23:11.773995  571789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.815679837s)
	I1120 21:23:11.774033  571789 kic.go:203] duration metric: took 4.815876955s to extract preloaded images to volume ...
	W1120 21:23:11.774136  571789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:23:11.774185  571789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:23:11.774253  571789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:23:11.850339  571789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-678421 --name newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-678421 --network newest-cni-678421 --ip 192.168.103.2 --volume newest-cni-678421:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:23:12.533350  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Running}}
	I1120 21:23:12.555197  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.575307  571789 cli_runner.go:164] Run: docker exec newest-cni-678421 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:23:12.632671  571789 oci.go:144] the created container "newest-cni-678421" has a running status.
	I1120 21:23:12.632720  571789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa...
	I1120 21:23:12.863151  571789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:23:12.899100  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.920234  571789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:23:12.920260  571789 kic_runner.go:114] Args: [docker exec --privileged newest-cni-678421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1120 21:23:12.970999  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.993837  571789 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:12.993956  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.013867  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.014157  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.014178  571789 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:13.161308  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.161339  571789 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:13.161406  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.181829  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.182058  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.182073  571789 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:13.328927  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.329019  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.349098  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.349376  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.349398  571789 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:13.484139  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:13.484177  571789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:13.484259  571789 ubuntu.go:190] setting up certificates
	I1120 21:23:13.484275  571789 provision.go:84] configureAuth start
	I1120 21:23:13.484350  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:13.503703  571789 provision.go:143] copyHostCerts
	I1120 21:23:13.503779  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:13.503794  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:13.503883  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:13.504018  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:13.504032  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:13.504073  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:13.504158  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:13.504168  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:13.504202  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:13.504315  571789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:13.626916  571789 provision.go:177] copyRemoteCerts
	I1120 21:23:13.626988  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:13.627031  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.646188  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:13.742867  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:13.765755  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:13.787099  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:13.810322  571789 provision.go:87] duration metric: took 326.026448ms to configureAuth
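
	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.103.2, and the machine's host names. A minimal Go sketch producing a certificate with those SANs; it self-signs for brevity, whereas the log shows the cert being signed with the minikube CA key:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-678421"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs matching the san=[...] list logged above.
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	            DNSNames:    []string{"localhost", "minikube", "newest-cni-678421"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
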
	I1120 21:23:13.810353  571789 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:13.810568  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:13.810697  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.837968  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.838338  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.838366  571789 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:14.162945  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:14.162974  571789 machine.go:97] duration metric: took 1.169111697s to provisionDockerMachine
	I1120 21:23:14.162987  571789 client.go:176] duration metric: took 7.819496914s to LocalClient.Create
	I1120 21:23:14.163010  571789 start.go:167] duration metric: took 7.81955499s to libmachine.API.Create "newest-cni-678421"
	I1120 21:23:14.163019  571789 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:14.163030  571789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:14.163109  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:14.163159  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.187939  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.299873  571789 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:14.304403  571789 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:14.304436  571789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:14.304458  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:14.304511  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:14.304580  571789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:14.304666  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:14.315114  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:14.341203  571789 start.go:296] duration metric: took 178.161388ms for postStartSetup
	I1120 21:23:14.341644  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.364787  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:14.365126  571789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:14.365189  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.388501  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.491729  571789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:14.498714  571789 start.go:128] duration metric: took 8.157415645s to createHost
	I1120 21:23:14.498748  571789 start.go:83] releasing machines lock for "newest-cni-678421", held for 8.157600418s
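
	Because the container only publishes port 22 on an ephemeral 127.0.0.1 port (33128 in this run), each SSH connection above first resolves the host port with a docker inspect template. A minimal Go sketch using the same template:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // sshHostPort resolves which 127.0.0.1 port Docker published for the
	    // container's port 22, using the inspect template from the log above.
	    func sshHostPort(container string) (string, error) {
	        out, err := exec.Command("docker", "container", "inspect", "-f",
	            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	        port, err := sshHostPort("newest-cni-678421")
	        fmt.Println(port, err) // e.g. 33128 in the run above
	    }
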
	I1120 21:23:14.498845  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.524498  571789 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:14.524558  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.524576  571789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:14.524652  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.549686  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.550328  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.730932  571789 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:14.739895  571789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:14.789379  571789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:14.795855  571789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:14.795934  571789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:14.829432  571789 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1120 21:23:14.829462  571789 start.go:496] detecting cgroup driver to use...
	I1120 21:23:14.829510  571789 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:14.829589  571789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:14.851761  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:14.867809  571789 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:14.867934  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:14.892255  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:14.918730  571789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:15.037147  571789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:15.171533  571789 docker.go:234] disabling docker service ...
	I1120 21:23:15.171611  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:15.196938  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:15.214136  571789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:15.323780  571789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:15.444697  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:15.464324  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:15.484640  571789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:15.484705  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.499771  571789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:15.499842  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.512691  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.526079  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.538826  571789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:15.550121  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.562853  571789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.582104  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.595993  571789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:15.606890  571789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
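	The echo into /proc above enables IPv4 forwarding immediately but non-persistently; the equivalent sysctl form is:
	  sudo sysctl -w net.ipv4.ip_forward=1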
	I1120 21:23:15.617086  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:15.737596  571789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:16.600257  571789 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:16.600349  571789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:16.605892  571789 start.go:564] Will wait 60s for crictl version
	I1120 21:23:16.606027  571789 ssh_runner.go:195] Run: which crictl
	I1120 21:23:16.610690  571789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:16.637058  571789 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:16.637154  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.670116  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.704078  571789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:16.705267  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:16.724295  571789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:16.728925  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
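	The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh one, and install the result with sudo cp (a plain > redirection would not carry the elevated privileges). The same pattern, generalized as a sketch with placeholder arguments:
	  update_hosts() {   # usage: update_hosts 192.168.103.1 host.minikube.internal
	    local ip="$1" name="$2"
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	  }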
	I1120 21:23:16.741905  571789 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1120 21:23:14.392714  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:16.891564  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:15.268024  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:17.768172  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:16.742987  571789 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:16.743128  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:16.743179  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.780101  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.780125  571789 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:16.780172  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.809837  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.809872  571789 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:16.809883  571789 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:16.810002  571789 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
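	Two details of the kubelet drop-in rendered above are easy to miss: the bare ExecStart= line deliberately clears the packaged command before the override sets minikube's, and the unit only takes effect after the systemctl daemon-reload that follows. On a systemd host it can be inspected with:
	  systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf override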
	I1120 21:23:16.810090  571789 ssh_runner.go:195] Run: crio config
	I1120 21:23:16.863639  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:16.863659  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:16.863681  571789 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:16.863704  571789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:16.863822  571789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
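	The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new below and fed to kubeadm init. If kubeadm config validate is available (recent kubeadm releases have it, though that is an assumption for this exact binary), the file can be sanity-checked offline first:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml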
	
	I1120 21:23:16.863884  571789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:16.873403  571789 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:16.873494  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:16.881985  571789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:16.896085  571789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:16.913519  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 21:23:16.928859  571789 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:16.933334  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.945027  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:17.031776  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:17.058982  571789 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:17.059010  571789 certs.go:195] generating shared ca certs ...
	I1120 21:23:17.059029  571789 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.059186  571789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:17.059248  571789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:17.059262  571789 certs.go:257] generating profile certs ...
	I1120 21:23:17.059323  571789 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:17.059344  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt with IP's: []
	I1120 21:23:17.213357  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt ...
	I1120 21:23:17.213389  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt: {Name:mke2db14d5c940e88a112fbde2b7f7a5c236c264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213571  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key ...
	I1120 21:23:17.213582  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key: {Name:mk64627472328d961f5d0acc5bb1ae55a18c598e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213666  571789 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:17.213689  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1120 21:23:17.465354  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb ...
	I1120 21:23:17.465382  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb: {Name:mk1f657111bdac9ee1dbd7f52b9080823e78b0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465538  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb ...
	I1120 21:23:17.465551  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb: {Name:mk0b65e76824a55204f187e73dc35407cb7853bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465624  571789 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt
	I1120 21:23:17.465704  571789 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key
	I1120 21:23:17.465758  571789 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:17.465775  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt with IP's: []
	I1120 21:23:17.786236  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt ...
	I1120 21:23:17.786271  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt: {Name:mkf64e7d9fa7e272a656caab1db35f0d50079c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.786461  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key ...
	I1120 21:23:17.786477  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key: {Name:mkadbe10d3a0cb1e1581b893a1e5760fc272fd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.787184  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:17.787274  571789 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:17.787292  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:17.787316  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:17.787339  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:17.787359  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:17.787408  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:17.788027  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:17.809571  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:17.829725  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:17.850042  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:17.870161  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:17.891028  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:17.910446  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:17.930120  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:17.949077  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:17.975331  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:17.995043  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:18.013730  571789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:18.027267  571789 ssh_runner.go:195] Run: openssl version
	I1120 21:23:18.033999  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.042006  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:18.049852  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053838  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053894  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.092857  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:18.101344  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:23:18.109957  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.119032  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:18.127682  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132138  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132200  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.181104  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:18.189712  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:23:18.198524  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.206158  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:18.213580  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217316  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217376  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.254832  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:23:18.263424  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
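	The openssl/ln pairs above implement OpenSSL's CA lookup convention: TLS clients search /etc/ssl/certs for symlinks named <subject-hash>.0, so each installed PEM gets one (b5213941, 51391683 and 3ec20f2e above are exactly those hashes). Condensed:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"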
	I1120 21:23:18.272052  571789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:18.276149  571789 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:23:18.276225  571789 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:18.276317  571789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:18.276376  571789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:18.305337  571789 cri.go:89] found id: ""
	I1120 21:23:18.305409  571789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:18.314096  571789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:23:18.322873  571789 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:23:18.322928  571789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:23:18.331021  571789 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:23:18.331048  571789 kubeadm.go:158] found existing configuration files:
	
	I1120 21:23:18.331102  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:23:18.338959  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:23:18.339007  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:23:18.346732  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:23:18.354398  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:23:18.354456  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:23:18.361888  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.370477  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:23:18.370533  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.378355  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:23:18.387242  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:23:18.387302  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:23:18.397935  571789 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:23:18.476910  571789 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:23:18.555544  571789 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1120 21:23:19.391574  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:21.392350  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:23.892432  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:20.268264  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:22.768335  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:25.391690  564741 pod_ready.go:94] pod "coredns-66bc5c9577-g47lf" is "Ready"
	I1120 21:23:25.391743  564741 pod_ready.go:86] duration metric: took 35.50597602s for pod "coredns-66bc5c9577-g47lf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.394978  564741 pod_ready.go:83] waiting for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.399661  564741 pod_ready.go:94] pod "etcd-embed-certs-714571" is "Ready"
	I1120 21:23:25.399686  564741 pod_ready.go:86] duration metric: took 4.680651ms for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.402021  564741 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.405973  564741 pod_ready.go:94] pod "kube-apiserver-embed-certs-714571" is "Ready"
	I1120 21:23:25.405997  564741 pod_ready.go:86] duration metric: took 3.949841ms for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.407841  564741 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.589324  564741 pod_ready.go:94] pod "kube-controller-manager-embed-certs-714571" is "Ready"
	I1120 21:23:25.589354  564741 pod_ready.go:86] duration metric: took 181.489846ms for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.789548  564741 pod_ready.go:83] waiting for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.190409  564741 pod_ready.go:94] pod "kube-proxy-nlj6n" is "Ready"
	I1120 21:23:26.190444  564741 pod_ready.go:86] duration metric: took 400.867423ms for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.390084  564741 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789386  564741 pod_ready.go:94] pod "kube-scheduler-embed-certs-714571" is "Ready"
	I1120 21:23:26.789415  564741 pod_ready.go:86] duration metric: took 399.299576ms for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789427  564741 pod_ready.go:40] duration metric: took 36.907183518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:26.838827  564741 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:26.847832  564741 out.go:179] * Done! kubectl is now configured to use "embed-certs-714571" cluster and "default" namespace by default
	W1120 21:23:25.269305  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:27.766813  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:29.313350  571789 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:23:29.313459  571789 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:23:29.313610  571789 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:23:29.313681  571789 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 21:23:29.313746  571789 kubeadm.go:319] OS: Linux
	I1120 21:23:29.313822  571789 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:23:29.313901  571789 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:23:29.313981  571789 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:23:29.314064  571789 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:23:29.314133  571789 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:23:29.314196  571789 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:23:29.314321  571789 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:23:29.314392  571789 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 21:23:29.314498  571789 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:23:29.314637  571789 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:23:29.314764  571789 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:23:29.314845  571789 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:23:29.316777  571789 out.go:252]   - Generating certificates and keys ...
	I1120 21:23:29.316887  571789 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:23:29.316965  571789 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:23:29.317057  571789 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:23:29.317139  571789 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:23:29.317270  571789 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:23:29.317353  571789 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:23:29.317420  571789 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:23:29.317573  571789 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317651  571789 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:23:29.317860  571789 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317969  571789 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:23:29.318061  571789 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:23:29.318157  571789 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:23:29.318279  571789 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:23:29.318369  571789 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:23:29.318477  571789 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:23:29.318544  571789 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:23:29.318663  571789 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:23:29.318746  571789 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:23:29.318856  571789 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:23:29.318951  571789 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:23:29.320271  571789 out.go:252]   - Booting up control plane ...
	I1120 21:23:29.320352  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:23:29.320414  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:23:29.320475  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:23:29.320580  571789 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:23:29.320662  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:23:29.320749  571789 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:23:29.320816  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:23:29.320848  571789 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:23:29.320957  571789 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:23:29.321044  571789 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:23:29.321097  571789 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.080831ms
	I1120 21:23:29.321173  571789 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:23:29.321275  571789 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1120 21:23:29.321347  571789 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:23:29.321409  571789 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:23:29.321471  571789 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.185451221s
	I1120 21:23:29.321533  571789 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.636543543s
	I1120 21:23:29.321592  571789 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002088865s
	I1120 21:23:29.321680  571789 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:23:29.321892  571789 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:23:29.321990  571789 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:23:29.322312  571789 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-678421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:23:29.322381  571789 kubeadm.go:319] [bootstrap-token] Using token: bgtwzb.1jmxu7h8xrihsar6
	I1120 21:23:29.324385  571789 out.go:252]   - Configuring RBAC rules ...
	I1120 21:23:29.324482  571789 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:23:29.324564  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:23:29.324693  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:23:29.324827  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:23:29.324923  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:23:29.324996  571789 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:23:29.325094  571789 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:23:29.325134  571789 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:23:29.325175  571789 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:23:29.325181  571789 kubeadm.go:319] 
	I1120 21:23:29.325253  571789 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:23:29.325263  571789 kubeadm.go:319] 
	I1120 21:23:29.325343  571789 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:23:29.325356  571789 kubeadm.go:319] 
	I1120 21:23:29.325382  571789 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:23:29.325434  571789 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:23:29.325475  571789 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:23:29.325480  571789 kubeadm.go:319] 
	I1120 21:23:29.325522  571789 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:23:29.325529  571789 kubeadm.go:319] 
	I1120 21:23:29.325566  571789 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:23:29.325571  571789 kubeadm.go:319] 
	I1120 21:23:29.325614  571789 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:23:29.325680  571789 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:23:29.325744  571789 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:23:29.325751  571789 kubeadm.go:319] 
	I1120 21:23:29.325875  571789 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:23:29.325954  571789 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:23:29.325961  571789 kubeadm.go:319] 
	I1120 21:23:29.326041  571789 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326142  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 21:23:29.326163  571789 kubeadm.go:319] 	--control-plane 
	I1120 21:23:29.326170  571789 kubeadm.go:319] 
	I1120 21:23:29.326275  571789 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:23:29.326284  571789 kubeadm.go:319] 
	I1120 21:23:29.326350  571789 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326438  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
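	The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key. Assuming the RSA CA that minikube generates (the ca.crt copied to /var/lib/minikube/certs earlier in this log), it can be recomputed with the standard kubeadm recipe:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'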
	I1120 21:23:29.326473  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:29.326482  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:29.327679  571789 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:23:29.328583  571789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:23:29.332973  571789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:23:29.332989  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:23:29.346147  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:23:29.573681  571789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:23:29.573755  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:29.573814  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-678421 minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=newest-cni-678421 minikube.k8s.io/primary=true
	I1120 21:23:29.667824  571789 ops.go:34] apiserver oom_adj: -16
	I1120 21:23:29.667832  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.168146  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.667910  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 21:23:30.266667  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:32.767122  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:31.168579  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:31.668019  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.168742  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.668690  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.168318  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.668957  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.168656  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.668775  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.751348  571789 kubeadm.go:1114] duration metric: took 5.177641354s to wait for elevateKubeSystemPrivileges
	I1120 21:23:34.751396  571789 kubeadm.go:403] duration metric: took 16.475185755s to StartCluster
	I1120 21:23:34.751420  571789 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.751503  571789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:34.753522  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.753817  571789 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:34.753838  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:23:34.753968  571789 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:34.754086  571789 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:34.754105  571789 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	I1120 21:23:34.754113  571789 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:34.754136  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.754148  571789 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:34.754313  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:34.754557  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.754792  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.756687  571789 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:34.759774  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:34.786514  571789 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:34.787803  571789 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.787913  571789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:34.788029  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.790763  571789 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	I1120 21:23:34.790812  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.791301  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.823476  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.825705  571789 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:34.825731  571789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:34.825787  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.852807  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.872624  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
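	The sed pipeline above splices two stanzas into the stock Corefile before kubectl replace pushes it back: a log directive ahead of errors, and this hosts block ahead of the forward plugin, so host.minikube.internal resolves to the gateway while fallthrough keeps every other name on the normal path:
	  hosts {
	     192.168.103.1 host.minikube.internal
	     fallthrough
	  }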
	I1120 21:23:34.937278  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:34.942585  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.968135  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:35.062180  571789 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1120 21:23:35.063734  571789 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:35.063787  571789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:35.251967  571789 api_server.go:72] duration metric: took 498.105523ms to wait for apiserver process to appear ...
	I1120 21:23:35.252003  571789 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:35.252029  571789 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:35.257634  571789 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
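	An equivalent manual probe of the endpoint checked above, using the kubeconfig minikube copied to the node earlier in this log:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
	  # prints: ok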
	I1120 21:23:35.258651  571789 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:35.258686  571789 api_server.go:131] duration metric: took 6.676193ms to wait for apiserver health ...
	I1120 21:23:35.258695  571789 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:35.258787  571789 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:23:35.260410  571789 addons.go:515] duration metric: took 506.448699ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1120 21:23:35.261510  571789 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:35.261546  571789 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261567  571789 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:35.261587  571789 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:35.261597  571789 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:35.261608  571789 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:35.261621  571789 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:35.261629  571789 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:35.261638  571789 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261647  571789 system_pods.go:74] duration metric: took 2.944734ms to wait for pod list to return data ...
	I1120 21:23:35.261657  571789 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:35.264365  571789 default_sa.go:45] found service account: "default"
	I1120 21:23:35.264386  571789 default_sa.go:55] duration metric: took 2.722096ms for default service account to be created ...
	I1120 21:23:35.264399  571789 kubeadm.go:587] duration metric: took 510.545674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:35.264416  571789 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:35.266974  571789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:35.267006  571789 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:35.267024  571789 node_conditions.go:105] duration metric: took 2.601571ms to run NodePressure ...
	I1120 21:23:35.267040  571789 start.go:242] waiting for startup goroutines ...
	I1120 21:23:35.567181  571789 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-678421" context rescaled to 1 replicas
	I1120 21:23:35.567240  571789 start.go:247] waiting for cluster config update ...
	I1120 21:23:35.567257  571789 start.go:256] writing updated cluster config ...
	I1120 21:23:35.567561  571789 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:35.624092  571789 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:35.625572  571789 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	W1120 21:23:34.771353  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:37.268021  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
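
The "waiting for apiserver healthz" step above is just an HTTP poll against the apiserver's /healthz endpoint until it returns 200. A minimal standalone sketch of that pattern (the endpoint address and timeouts are illustrative assumptions, not minikube's actual code):

    // healthz_poll.go - poll https://<node-ip>:8443/healthz until it returns 200,
    // the same check logged above as "Checking apiserver healthz at ...".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves a self-signed cert during bootstrap, so a
        // health probe skips verification rather than pinning the CA.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.103.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy") // corresponds to "returned 200: ok" above
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }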
	
	
	==> CRI-O <==
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.73338383Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.733415124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.733433837Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737556472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737579052Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737598145Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.741510597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.741541504Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.773007403Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.777849473Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.77788694Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.7779109Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.786825193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.786861026Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:23:10 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.978512567Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=70c5627c-f4df-4e98-bc00-6ad9826bfd62 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:10 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.995033049Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=646babb9-cb44-4b4c-876a-172b70f40dc3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.998891701Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=1d435a08-babd-4e87-b29d-cddd67f9c4bf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.999066581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.044321012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.045020321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.319277471Z" level=info msg="Created container 0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=1d435a08-babd-4e87-b29d-cddd67f9c4bf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.320078473Z" level=info msg="Starting container: 0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4" id=13d3bd6d-a7d4-4955-afa7-a67c46f886e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.322209772Z" level=info msg="Started container" PID=1779 containerID=0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper id=13d3bd6d-a7d4-4955-afa7-a67c46f886e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=928656077e513dc15b949906f88480c46a990846593584b9a63121826ff7ab03
	Nov 20 21:23:12 embed-certs-714571 crio[570]: time="2025-11-20T21:23:12.088354872Z" level=info msg="Removing container: eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997" id=19a9a34e-c3e3-4c1c-a61b-ec25b0d8e7d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:12 embed-certs-714571 crio[570]: time="2025-11-20T21:23:12.177865261Z" level=info msg="Removed container eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=19a9a34e-c3e3-4c1c-a61b-ec25b0d8e7d6 name=/runtime.v1.RuntimeService/RemoveContainer
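
The CNI monitoring events above come from a filesystem watch on /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp and renames it into place, which CRI-O observes as the WRITE/RENAME/CREATE sequence and answers by reloading the default network. A sketch of the same watch pattern, using fsnotify as an illustrative stand-in for CRI-O's own watcher:

    // cni_watch.go - watch /etc/cni/net.d and log events, mirroring the
    // "CNI monitoring event ..." lines in the CRI-O journal above.
    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        // Each rewrite of 10-kindnet.conflist.temp and its rename into place
        // shows up as the WRITE/RENAME/CREATE sequence seen in the log.
        for ev := range w.Events {
            log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
        }
    }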
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b66165ea5d62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   928656077e513       dashboard-metrics-scraper-6ffb444bf9-zjxvl   kubernetes-dashboard
	369fc37c64c4b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   dbba6ecb55c22       kubernetes-dashboard-855c9754f9-km7xn        kubernetes-dashboard
	a23c77cfc3824       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   0aeec5dd20432       storage-provisioner                          kube-system
	24cb3553d837b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   0aeec5dd20432       storage-provisioner                          kube-system
	7711d5f53716a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   7f766284e7df5       coredns-66bc5c9577-g47lf                     kube-system
	9a183b300cec1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   4f6d9a7048b38       busybox                                      default
	eb70c0bf6966d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   9d6eea8e1c64a       kube-proxy-nlj6n                             kube-system
	a71da522ea0a7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   eb88deef215ae       kindnet-5ctwj                                kube-system
	211d625d3d512       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   6e9394a6ab71c       kube-controller-manager-embed-certs-714571   kube-system
	1fb52640b776a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   d51d1c82cfc47       kube-scheduler-embed-certs-714571            kube-system
	037a8b45fa83d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   8bed5f3694fb9       kube-apiserver-embed-certs-714571            kube-system
	e73953c845da8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   48b5a2df32eb0       etcd-embed-certs-714571                      kube-system
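
A table like the one above can be reproduced by querying CRI-O's CRI gRPC endpoint directly. A sketch, under the assumption that the socket lives at the conventional /var/run/crio/crio.sock and printing only a few of the fields shown:

    // cri_ps.go - list containers over the CRI runtime API, the source of
    // the "container status" table above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            // Truncate the ID to 13 chars, as the table above does.
            fmt.Printf("%-13.13s %-27s %s\n", c.Id, c.Metadata.Name, c.State.String())
        }
    }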
	
	
	==> coredns [7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42065 - 28366 "HINFO IN 8354263009836140719.3001095838449680950. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078162508s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
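
The CoreDNS list failures above are plain TCP timeouts to the kubernetes Service VIP (10.96.0.1:443), which is expected briefly after a restart until kube-proxy reprograms its rules; once it does, the informers recover. The check they boil down to is a dial with a deadline (the VIP is the in-cluster default, assumed here):

    // vip_check.go - probe the kubernetes Service VIP the way the failing
    // client-go reflectors above effectively do.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP unreachable:", err) // matches the "i/o timeout" above
            return
        }
        conn.Close()
        fmt.Println("VIP reachable")
    }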
	
	
	==> describe nodes <==
	Name:               embed-certs-714571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-714571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-714571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-714571
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-714571
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                b8c8edd2-d291-40b8-8776-13cdc9b6d9a8
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-g47lf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-714571                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-5ctwj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-714571             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-714571    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-nlj6n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-714571             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zjxvl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-km7xn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 119s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 119s)  kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 119s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-714571 event: Registered Node embed-certs-714571 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-714571 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)    kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)    kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)    kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-714571 event: Registered Node embed-certs-714571 in Controller
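
For reference, the percentages in the Allocated resources table above are requests and limits over the node's allocatable: 850m CPU requested of 8000m allocatable is 850/8000 ≈ 10.6%, truncated to the 10% shown, and 220Mi of 32863360Ki memory is roughly 0.7%, displayed as 0%.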
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b] <==
	{"level":"warn","ts":"2025-11-20T21:22:47.745010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.752067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.758897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.766434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.775263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.781600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.787992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.800376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.806422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.812750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.819727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.826713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.833055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.840867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.848147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.854889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.861517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.868945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.875199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.888436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.894863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.903062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.952005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:23:11.452288Z","caller":"traceutil/trace.go:172","msg":"trace[903564221] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"128.128496ms","start":"2025-11-20T21:23:11.324134Z","end":"2025-11-20T21:23:11.452262Z","steps":["trace[903564221] 'process raft request'  (duration: 127.9271ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:23:11.716896Z","caller":"traceutil/trace.go:172","msg":"trace[1583924621] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"119.543928ms","start":"2025-11-20T21:23:11.597328Z","end":"2025-11-20T21:23:11.716872Z","steps":["trace[1583924621] 'process raft request'  (duration: 119.330247ms)"],"step_count":1}
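
The long run of "rejected connection ... EOF" warnings above is the usual signature of plain TCP probes: a client dials etcd's client port and closes without starting a TLS handshake, once per probe. A sketch that provokes the same server-side log line (the localhost client port 2379 is the conventional default, assumed here):

    // probe_etcd.go - dial the etcd client port and close immediately,
    // the pattern behind the "rejected connection ... EOF" warnings.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        conn, err := net.Dial("tcp", "127.0.0.1:2379")
        if err != nil {
            log.Fatal(err)
        }
        conn.Close() // no TLS handshake, so etcd logs the connection as EOF
        fmt.Println("port open")
    }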
	
	
	==> kernel <==
	 21:23:42 up  4:06,  0 user,  load average: 3.83, 4.51, 2.95
	Linux embed-certs-714571 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06] <==
	I1120 21:22:49.525955       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:22:49.526179       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:49.526205       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:49.526256       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:49.728838       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:49.728860       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:49.728869       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:49.728988       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:22:49.729124       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:22:49.801851       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 21:22:50.929054       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:50.929094       1 metrics.go:72] Registering metrics
	I1120 21:22:50.929236       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:59.728565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:22:59.728641       1 main.go:301] handling current node
	I1120 21:23:09.734328       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:09.734367       1 main.go:301] handling current node
	I1120 21:23:19.728385       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:19.728419       1 main.go:301] handling current node
	I1120 21:23:29.730519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:29.730558       1 main.go:301] handling current node
	I1120 21:23:39.730755       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:39.730791       1 main.go:301] handling current node
	
	
	==> kube-apiserver [037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37] <==
	I1120 21:22:48.417652       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 21:22:48.418058       1 aggregator.go:171] initial CRD sync complete...
	I1120 21:22:48.418069       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:22:48.418075       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:22:48.418081       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:22:48.417700       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 21:22:48.417797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:22:48.418756       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:48.419188       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:22:48.419816       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:22:48.419861       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:22:48.425909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:22:48.446191       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:48.465288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:48.649067       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:22:48.681990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:22:48.701120       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:48.708003       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:48.716548       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:22:48.750729       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.56.41"}
	I1120 21:22:48.761837       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.235.194"}
	I1120 21:22:49.321686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:52.133392       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:22:52.235712       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:22:52.333361       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c] <==
	I1120 21:22:51.736339       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:22:51.749605       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:51.751938       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:22:51.780500       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:22:51.780537       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:22:51.780558       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:22:51.780601       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:22:51.780703       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:22:51.780734       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:22:51.780810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:22:51.780827       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:22:51.780846       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:22:51.781428       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:22:51.781470       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:22:51.785783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:22:51.785867       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:22:51.785898       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:22:51.785908       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:22:51.785913       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:22:51.786930       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:51.796122       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:22:51.801496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:22:51.801512       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:22:51.801521       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:22:51.805620       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45] <==
	I1120 21:22:49.368473       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:22:49.453555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:22:49.553911       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:22:49.553961       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:22:49.554055       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:22:49.575683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:49.575752       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:22:49.581298       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:22:49.581716       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:22:49.581756       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:49.583198       1 config.go:200] "Starting service config controller"
	I1120 21:22:49.583316       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:22:49.583380       1 config.go:309] "Starting node config controller"
	I1120 21:22:49.583395       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:22:49.583402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:22:49.583468       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:22:49.583578       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:22:49.583600       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:22:49.583709       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:22:49.683969       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:22:49.684040       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:22:49.684051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0] <==
	I1120 21:22:47.775097       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:22:48.389016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:22:48.389048       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:48.395160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:22:48.395368       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:22:48.395392       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:22:48.395434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:22:48.395859       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.395908       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.395992       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:48.396010       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:48.495848       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 21:22:48.496039       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.496151       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474047     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zb4\" (UniqueName: \"kubernetes.io/projected/74c07910-53db-450e-8569-ca8454ffb12f-kube-api-access-64zb4\") pod \"kubernetes-dashboard-855c9754f9-km7xn\" (UID: \"74c07910-53db-450e-8569-ca8454ffb12f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn"
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474072     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/74c07910-53db-450e-8569-ca8454ffb12f-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-km7xn\" (UID: \"74c07910-53db-450e-8569-ca8454ffb12f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn"
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474097     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37045a96-fcea-45f3-a11b-712b1d99ad70-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zjxvl\" (UID: \"37045a96-fcea-45f3-a11b-712b1d99ad70\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl"
	Nov 20 21:22:55 embed-certs-714571 kubelet[735]: I1120 21:22:55.163299     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 21:22:56 embed-certs-714571 kubelet[735]: I1120 21:22:56.029388     735 scope.go:117] "RemoveContainer" containerID="279e1467f671ea0f2529198f8bf029057c91928c7081a424c05bedbdd993cf92"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: I1120 21:22:57.034774     735 scope.go:117] "RemoveContainer" containerID="279e1467f671ea0f2529198f8bf029057c91928c7081a424c05bedbdd993cf92"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: I1120 21:22:57.034936     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: E1120 21:22:57.035158     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:58 embed-certs-714571 kubelet[735]: I1120 21:22:58.042654     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:58 embed-certs-714571 kubelet[735]: E1120 21:22:58.042858     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: I1120 21:22:59.047051     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: E1120 21:22:59.047273     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: I1120 21:22:59.059492     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn" podStartSLOduration=0.870975383 podStartE2EDuration="7.059467766s" podCreationTimestamp="2025-11-20 21:22:52 +0000 UTC" firstStartedPulling="2025-11-20 21:22:52.741131595 +0000 UTC m=+6.856164465" lastFinishedPulling="2025-11-20 21:22:58.929623979 +0000 UTC m=+13.044656848" observedRunningTime="2025-11-20 21:22:59.059357823 +0000 UTC m=+13.174390700" watchObservedRunningTime="2025-11-20 21:22:59.059467766 +0000 UTC m=+13.174500645"
	Nov 20 21:23:10 embed-certs-714571 kubelet[735]: I1120 21:23:10.977884     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: I1120 21:23:12.086632     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: I1120 21:23:12.087016     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: E1120 21:23:12.087204     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:18 embed-certs-714571 kubelet[735]: I1120 21:23:18.682647     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:18 embed-certs-714571 kubelet[735]: E1120 21:23:18.682895     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:29 embed-certs-714571 kubelet[735]: I1120 21:23:29.977506     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:29 embed-certs-714571 kubelet[735]: E1120 21:23:29.977738     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
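
The jump from "back-off 10s" to "back-off 20s" in the kubelet errors above is the standard CrashLoopBackOff schedule: the restart delay starts at 10s and doubles on each failed restart (10s, 20s, 40s, ...) up to a 5-minute cap, resetting only after the container runs cleanly for 10 minutes.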
	
	
	==> kubernetes-dashboard [369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85] <==
	2025/11/20 21:22:59 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:59 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:59 Using secret token for csrf signing
	2025/11/20 21:22:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:22:59 Generating JWE encryption key
	2025/11/20 21:22:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:59 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:59 Creating in-cluster Sidecar client
	2025/11/20 21:22:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:59 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:59 Starting overwatch
	
	
	==> storage-provisioner [24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30] <==
	I1120 21:22:49.624314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:49.626328       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62] <==
	W1120 21:23:17.746976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.751082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:19.755380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.759085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.765270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.772738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.780429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:25.783470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:25.787751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:27.790855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:27.797068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:29.800566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:29.805850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:31.809933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:31.813806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:33.817756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:33.822975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:35.826360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:35.830193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.833716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.838330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.842073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.846280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.849653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.854028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
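
The warning repeats in pairs roughly every two seconds, likely because the provisioner's leader-election loop still renews its lock through v1 Endpoints, which the API server has flagged as deprecated since v1.33. Reading the replacement API the warning points at, discovery.k8s.io/v1 EndpointSlices, looks roughly like this with client-go (in-cluster config and the kube-system namespace are assumptions for illustration):

    // endpointslices.go - list discovery.k8s.io/v1 EndpointSlices, the API
    // the deprecation warnings above recommend over v1 Endpoints.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        slices, err := clientset.DiscoveryV1().EndpointSlices("kube-system").
            List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range slices.Items {
            fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
        }
    }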
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-714571 -n embed-certs-714571: exit status 2 (341.409388ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-714571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-714571
helpers_test.go:243: (dbg) docker inspect embed-certs-714571:

-- stdout --
	[
	    {
	        "Id": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	        "Created": "2025-11-20T21:21:34.898715026Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 565066,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:39.336356021Z",
	            "FinishedAt": "2025-11-20T21:22:37.720488069Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240/ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240-json.log",
	        "Name": "/embed-certs-714571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-714571:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-714571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccf93eabab841e105483df2df8a352192b50dea437cd0810dd69bceddaba5240",
	                "LowerDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75dc18a1c84f687e1499087042823149f900b1b12bd0762756a58d585f56d333/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-714571",
	                "Source": "/var/lib/docker/volumes/embed-certs-714571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-714571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-714571",
	                "name.minikube.sigs.k8s.io": "embed-certs-714571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4dac0e46f46cd8de31033ca170513fdb9e74dc3dff1f5af75cdbcceb26c387c9",
	            "SandboxKey": "/var/run/docker/netns/4dac0e46f46c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-714571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ab433249a4ff0be5f1bb45e1da7b7dc47bc44c49beb110d4c515f5ebe9f33a4",
	                    "EndpointID": "0ee8eb1f167b8236d2d0801f6708f4143fe5473e1054cd0ab5c57ddc3fc66451",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:5a:9b:8e:e2:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-714571",
	                        "ccf93eabab84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
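In a post-mortem the inspect dump above is read mostly for two things: container state and the 127.0.0.1 host-port bindings. A hedged sketch of pulling just the port map through the Docker Engine API (container name taken from this run; error handling trimmed to panics for brevity):

	package main
	
	import (
		"context"
		"fmt"
	
		"github.com/docker/docker/client"
	)
	
	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()
		// Same data as the `docker inspect` output above, via the API.
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-714571")
		if err != nil {
			panic(err)
		}
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:33118
			}
		}
	}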
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571: exit status 2 (343.218766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25
E1120 21:23:44.441562  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-714571 logs -n 25: (1.120132332s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-166874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:21 UTC │                     │
	│ stop    │ -p no-preload-166874 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p embed-certs-714571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ stop    │ -p embed-certs-714571 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-454524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-454524 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:06.049245  571789 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:06.049501  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049513  571789 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:06.049519  571789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:06.049841  571789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:06.050567  571789 out.go:368] Setting JSON to false
	I1120 21:23:06.052400  571789 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14728,"bootTime":1763659058,"procs":409,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:06.052535  571789 start.go:143] virtualization: kvm guest
	I1120 21:23:06.055111  571789 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:06.056602  571789 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:06.056605  571789 notify.go:221] Checking for updates...
	I1120 21:23:06.062930  571789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:06.067567  571789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:06.069232  571789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:06.070624  571789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:06.072902  571789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:06.074784  571789 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.074945  571789 config.go:182] Loaded profile config "embed-certs-714571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075081  571789 config.go:182] Loaded profile config "no-preload-166874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:06.075229  571789 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:06.120678  571789 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:06.120819  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.216315  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.199460321 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.216453  571789 docker.go:319] overlay module found
	I1120 21:23:06.218400  571789 out.go:179] * Using the docker driver based on user configuration
	I1120 21:23:06.219702  571789 start.go:309] selected driver: docker
	I1120 21:23:06.219714  571789 start.go:930] validating driver "docker" against <nil>
	I1120 21:23:06.219729  571789 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:06.220696  571789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:06.302193  571789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-20 21:23:06.29041782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:06.302376  571789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1120 21:23:06.302402  571789 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1120 21:23:06.302588  571789 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:06.305364  571789 out.go:179] * Using Docker driver with root privileges
	I1120 21:23:06.306728  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:06.306783  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:06.306792  571789 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 21:23:06.306891  571789 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:06.308307  571789 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:06.309596  571789 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:06.311056  571789 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:06.312309  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.312345  571789 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:06.312360  571789 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:06.312412  571789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:06.312479  571789 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:06.312494  571789 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:06.312653  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:06.312677  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json: {Name:mkf4f376b35371249315ca8102adde29558a901f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:06.340931  571789 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:06.340959  571789 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:06.340975  571789 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:06.341010  571789 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:06.341132  571789 start.go:364] duration metric: took 97.864µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:06.341163  571789 start.go:93] Provisioning new machine with config: &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:06.341279  571789 start.go:125] createHost starting for "" (driver="docker")
	W1120 21:23:05.393230  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:07.891482  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	I1120 21:23:05.205163  567536 addons.go:515] duration metric: took 2.420707864s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:05.695398  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:05.702083  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:05.702112  567536 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:06.195506  567536 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1120 21:23:06.201376  567536 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1120 21:23:06.202743  567536 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:06.202819  567536 api_server.go:131] duration metric: took 1.008149378s to wait for apiserver health ...
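	// A sketch, not harness output: the probes above poll
	// https://192.168.85.2:8444/healthz until the apiserver returns 200.
	// The single failing check, [-]poststarthook/rbac/bootstrap-roles, is
	// typically transient: it clears once the bootstrap RBAC policy has
	// been written, which is why the very next probe succeeds. A minimal
	// poll loop under those assumptions (InsecureSkipVerify stands in for
	// the cluster CA bundle minikube actually uses):
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		c := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := c.Get("https://192.168.85.2:8444/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}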
	I1120 21:23:06.202844  567536 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:06.209670  567536 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:06.209779  567536 system_pods.go:61] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.209798  567536 system_pods.go:61] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.209807  567536 system_pods.go:61] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.209817  567536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.209832  567536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.209838  567536 system_pods.go:61] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.209845  567536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.209856  567536 system_pods.go:61] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.209865  567536 system_pods.go:74] duration metric: took 7.010955ms to wait for pod list to return data ...
	I1120 21:23:06.209877  567536 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:06.215993  567536 default_sa.go:45] found service account: "default"
	I1120 21:23:06.216099  567536 default_sa.go:55] duration metric: took 6.211471ms for default service account to be created ...
	I1120 21:23:06.216167  567536 system_pods.go:116] waiting for k8s-apps to be running ...
	I1120 21:23:06.219656  567536 system_pods.go:86] 8 kube-system pods found
	I1120 21:23:06.219693  567536 system_pods.go:89] "coredns-66bc5c9577-zkl9z" [f9d943a5-d29a-402e-ad52-29d36ed22d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1120 21:23:06.219715  567536 system_pods.go:89] "etcd-default-k8s-diff-port-454524" [c37cbe2d-1dae-4c10-8dd2-a0fa2e377a0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:06.219722  567536 system_pods.go:89] "kindnet-clzlq" [bdc96c97-76df-4b3e-ac9a-4bda9a760322] Running
	I1120 21:23:06.219731  567536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-454524" [f8f9479e-3435-43ee-a5b5-229ab4143080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:06.219739  567536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-454524" [e074be59-377e-4d5b-ba02-85f1df7f9285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:06.219745  567536 system_pods.go:89] "kube-proxy-fpnmp" [22ef496e-f864-423f-9af3-54490ba5e8fc] Running
	I1120 21:23:06.219754  567536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-454524" [7542b0d1-83e2-4a53-809c-2c01c75cb3ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:06.219761  567536 system_pods.go:89] "storage-provisioner" [bc9ffafb-037f-41fa-b27e-75d8ee4aff49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1120 21:23:06.219771  567536 system_pods.go:126] duration metric: took 3.576854ms to wait for k8s-apps to be running ...
	I1120 21:23:06.219780  567536 system_svc.go:44] waiting for kubelet service to be running ....
	I1120 21:23:06.219827  567536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:06.242346  567536 system_svc.go:56] duration metric: took 22.555852ms WaitForService to wait for kubelet
	I1120 21:23:06.242379  567536 kubeadm.go:587] duration metric: took 3.45805481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1120 21:23:06.242401  567536 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:06.248588  567536 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:06.248623  567536 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:06.248641  567536 node_conditions.go:105] duration metric: took 6.233957ms to run NodePressure ...
	I1120 21:23:06.248657  567536 start.go:242] waiting for startup goroutines ...
	I1120 21:23:06.248666  567536 start.go:247] waiting for cluster config update ...
	I1120 21:23:06.248680  567536 start.go:256] writing updated cluster config ...
	I1120 21:23:06.249011  567536 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:06.254875  567536 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:06.260944  567536 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zkl9z" in "kube-system" namespace to be "Ready" or be gone ...
	W1120 21:23:08.267255  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
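	// A sketch, not harness output: the pod_ready warnings above reduce to
	// checking the PodReady condition on each pod. Illustrative helper
	// (package and function names are assumptions, not minikube's code):
	package podready
	
	import corev1 "k8s.io/api/core/v1"
	
	// IsPodReady reports whether the pod's PodReady condition is True,
	// the same predicate behind the "is not Ready" messages above.
	func IsPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}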
	I1120 21:23:06.343254  571789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1120 21:23:06.343455  571789 start.go:159] libmachine.API.Create for "newest-cni-678421" (driver="docker")
	I1120 21:23:06.343482  571789 client.go:173] LocalClient.Create starting
	I1120 21:23:06.343553  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem
	I1120 21:23:06.343582  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343598  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.343655  571789 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem
	I1120 21:23:06.343676  571789 main.go:143] libmachine: Decoding PEM data...
	I1120 21:23:06.343686  571789 main.go:143] libmachine: Parsing certificate...
	I1120 21:23:06.344001  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1120 21:23:06.362461  571789 cli_runner.go:211] docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1120 21:23:06.362549  571789 network_create.go:284] running [docker network inspect newest-cni-678421] to gather additional debugging logs...
	I1120 21:23:06.362568  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421
	W1120 21:23:06.383025  571789 cli_runner.go:211] docker network inspect newest-cni-678421 returned with exit code 1
	I1120 21:23:06.383064  571789 network_create.go:287] error running [docker network inspect newest-cni-678421]: docker network inspect newest-cni-678421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-678421 not found
	I1120 21:23:06.383078  571789 network_create.go:289] output of [docker network inspect newest-cni-678421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-678421 not found
	
	** /stderr **
	I1120 21:23:06.383171  571789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:06.403776  571789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
	I1120 21:23:06.404546  571789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-236096b62963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:6a:8a:e6:9d:fd} reservation:<nil>}
	I1120 21:23:06.405526  571789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b286adbb6956 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b3:00:b3:b1:06} reservation:<nil>}
	I1120 21:23:06.406341  571789 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1ab433249a4f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:74:b3:0e:d4:91} reservation:<nil>}
	I1120 21:23:06.407123  571789 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4a91837c366f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:66:0c:88:d0:b5:58} reservation:<nil>}
	I1120 21:23:06.407767  571789 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6bf71dac4c7d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ee:29:7e:d9:60:3c} reservation:<nil>}
	I1120 21:23:06.408763  571789 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f00f50}
	I1120 21:23:06.408794  571789 network_create.go:124] attempt to create docker network newest-cni-678421 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1120 21:23:06.408864  571789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-678421 newest-cni-678421
	I1120 21:23:06.467067  571789 network_create.go:108] docker network newest-cni-678421 192.168.103.0/24 created
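	// A sketch, not harness output: the subnet scan above tries private
	// /24s whose third octet advances by 9 (49, 58, 67, ...) and takes the
	// first one no existing bridge network claims. Reproducing the walk
	// with the subnets this log reports as taken:
	package main
	
	import "fmt"
	
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		for third := 49; third < 256; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				fmt.Println("using free private subnet", cidr) // prints 192.168.103.0/24
				return
			}
		}
	}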
	I1120 21:23:06.467117  571789 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-678421" container
	I1120 21:23:06.467193  571789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1120 21:23:06.485312  571789 cli_runner.go:164] Run: docker volume create newest-cni-678421 --label name.minikube.sigs.k8s.io=newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true
	I1120 21:23:06.505057  571789 oci.go:103] Successfully created a docker volume newest-cni-678421
	I1120 21:23:06.505146  571789 cli_runner.go:164] Run: docker run --rm --name newest-cni-678421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --entrypoint /usr/bin/test -v newest-cni-678421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1120 21:23:06.958057  571789 oci.go:107] Successfully prepared a docker volume newest-cni-678421
	I1120 21:23:06.958140  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:06.958154  571789 kic.go:194] Starting extracting preloaded images to volume ...
	I1120 21:23:06.958256  571789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	W1120 21:23:09.892319  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:11.894030  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:10.767056  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:12.767732  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:11.773995  571789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-678421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.815679837s)
	I1120 21:23:11.774033  571789 kic.go:203] duration metric: took 4.815876955s to extract preloaded images to volume ...
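The preload is unpacked into the node's /var volume before the node container exists by running a throwaway container whose entrypoint is tar, with the lz4 tarball and the named volume both mounted, exactly as the docker run line above shows. A sketch of the same pattern via os/exec (the tarball path, volume, and image here are placeholders; the image must ship tar and lz4):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Mount the lz4 tarball read-only and the named volume at the
		// extraction root, then let tar unpack directly into the volume.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
			"-v", "myvolume:/extractDir",
			"docker.io/library/ubuntu:24.04", // placeholder; needs tar + lz4
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}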
	W1120 21:23:11.774136  571789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1120 21:23:11.774185  571789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1120 21:23:11.774253  571789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1120 21:23:11.850339  571789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-678421 --name newest-cni-678421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-678421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-678421 --network newest-cni-678421 --ip 192.168.103.2 --volume newest-cni-678421:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1120 21:23:12.533350  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Running}}
	I1120 21:23:12.555197  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.575307  571789 cli_runner.go:164] Run: docker exec newest-cni-678421 stat /var/lib/dpkg/alternatives/iptables
	I1120 21:23:12.632671  571789 oci.go:144] the created container "newest-cni-678421" has a running status.
	I1120 21:23:12.632720  571789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa...
	I1120 21:23:12.863151  571789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1120 21:23:12.899100  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.920234  571789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1120 21:23:12.920260  571789 kic_runner.go:114] Args: [docker exec --privileged newest-cni-678421 chown docker:docker /home/docker/.ssh/authorized_keys]
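kic.go generates a fresh RSA key pair for the node and installs the public half as /home/docker/.ssh/authorized_keys inside the container (the kic_runner lines above). A sketch of the key-generation side, assuming golang.org/x/crypto/ssh for the authorized_keys encoding; the output file names are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// An RSA key pair, like the id_rsa/id_rsa.pub pair in the log.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Private key in PEM form, mode 0600.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			log.Fatal(err)
		}
		// Public key in authorized_keys format.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			log.Fatal(err)
		}
	}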
	I1120 21:23:12.970999  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:12.993837  571789 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:12.993956  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.013867  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.014157  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.014178  571789 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:13.161308  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.161339  571789 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:13.161406  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.181829  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.182058  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.182073  571789 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:13.328927  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:13.329019  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.349098  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.349376  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.349398  571789 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:13.484139  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:13.484177  571789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:13.484259  571789 ubuntu.go:190] setting up certificates
	I1120 21:23:13.484275  571789 provision.go:84] configureAuth start
	I1120 21:23:13.484350  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:13.503703  571789 provision.go:143] copyHostCerts
	I1120 21:23:13.503779  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:13.503794  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:13.503883  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:13.504018  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:13.504032  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:13.504073  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:13.504158  571789 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:13.504168  571789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:13.504202  571789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:13.504315  571789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:13.626916  571789 provision.go:177] copyRemoteCerts
	I1120 21:23:13.626988  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:13.627031  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.646188  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:13.742867  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:13.765755  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:13.787099  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:13.810322  571789 provision.go:87] duration metric: took 326.026448ms to configureAuth
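configureAuth mints a server certificate whose SAN list is exactly the one logged above (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-678421). A compressed crypto/x509 sketch of SAN-bearing certificate generation; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-678421"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-678421"},
		}
		// Self-signed here; minikube passes the CA cert and key as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
			log.Fatal(err)
		}
	}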
	I1120 21:23:13.810353  571789 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:13.810568  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:13.810697  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:13.837968  571789 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:13.838338  571789 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1120 21:23:13.838366  571789 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:14.162945  571789 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:14.162974  571789 machine.go:97] duration metric: took 1.169111697s to provisionDockerMachine
	I1120 21:23:14.162987  571789 client.go:176] duration metric: took 7.819496914s to LocalClient.Create
	I1120 21:23:14.163010  571789 start.go:167] duration metric: took 7.81955499s to libmachine.API.Create "newest-cni-678421"
	I1120 21:23:14.163019  571789 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:14.163030  571789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:14.163109  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:14.163159  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.187939  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.299873  571789 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:14.304403  571789 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:14.304436  571789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:14.304458  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:14.304511  571789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:14.304580  571789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:14.304666  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:14.315114  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:14.341203  571789 start.go:296] duration metric: took 178.161388ms for postStartSetup
	I1120 21:23:14.341644  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.364787  571789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:14.365126  571789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:14.365189  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.388501  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.491729  571789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:14.498714  571789 start.go:128] duration metric: took 8.157415645s to createHost
	I1120 21:23:14.498748  571789 start.go:83] releasing machines lock for "newest-cni-678421", held for 8.157600418s
	I1120 21:23:14.498845  571789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:14.524498  571789 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:14.524558  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.524576  571789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:14.524652  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:14.549686  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.550328  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:14.730932  571789 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:14.739895  571789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:14.789379  571789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:14.795855  571789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:14.795934  571789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:14.829432  571789 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
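Before installing its own CNI, minikube sidelines preexisting bridge and podman configs by renaming them with a .mk_disabled suffix, which is what the find/-exec mv above does. The same operation in Go (the /etc/cni/net.d path is taken from the log):

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pat)
			if err != nil {
				log.Fatal(err)
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already sidelined, like find's -not -name *.mk_disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
			}
		}
	}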
	I1120 21:23:14.829462  571789 start.go:496] detecting cgroup driver to use...
	I1120 21:23:14.829510  571789 detect.go:190] detected "systemd" cgroup driver on host os
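detect.go reports a "systemd" cgroup driver for the host. One common heuristic for that decision, offered here purely as an assumption rather than minikube's exact logic: a unified cgroup v2 mount exposes cgroup.controllers at its root, and such hosts normally run the systemd driver:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Assumption: cgroup v2 (unified hierarchy) implies the systemd
		// cgroup driver; cgroup.controllers only exists on v2 mounts.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 detected; use the systemd cgroup driver")
		} else {
			fmt.Println("cgroup v1; cgroupfs driver may be in use")
		}
	}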
	I1120 21:23:14.829589  571789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:14.851761  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:14.867809  571789 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:14.867934  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:14.892255  571789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:14.918730  571789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:15.037147  571789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:15.171533  571789 docker.go:234] disabling docker service ...
	I1120 21:23:15.171611  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:15.196938  571789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:15.214136  571789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:15.323780  571789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:15.444697  571789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:15.464324  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:15.484640  571789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:15.484705  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.499771  571789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:15.499842  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.512691  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.526079  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.538826  571789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:15.550121  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.562853  571789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.582104  571789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:15.595993  571789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:15.606890  571789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:15.617086  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:15.737596  571789 ssh_runner.go:195] Run: sudo systemctl restart crio
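The CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf is rewritten in place: pause_image, cgroup_manager = "systemd", conmon_cgroup = "pod", and the unprivileged-port sysctl, each via sed, followed by a daemon-reload and restart. The first of those edits redone in Go for illustration (the file path and its prior contents are assumptions):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}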
	I1120 21:23:16.600257  571789 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:16.600349  571789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:16.605892  571789 start.go:564] Will wait 60s for crictl version
	I1120 21:23:16.606027  571789 ssh_runner.go:195] Run: which crictl
	I1120 21:23:16.610690  571789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:16.637058  571789 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
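start.go gives the runtime 60s to expose its CRI socket before probing crictl, as the two "Will wait 60s" lines above show. A small sketch of that kind of bounded wait, assuming a plain stat-style poll:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes,
	// mirroring the 60s waits logged by start.go.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is up")
	}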
	I1120 21:23:16.637154  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.670116  571789 ssh_runner.go:195] Run: crio --version
	I1120 21:23:16.704078  571789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:16.705267  571789 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:16.724295  571789 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:16.728925  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.741905  571789 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1120 21:23:14.392714  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:16.891564  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:15.268024  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:17.768172  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:16.742987  571789 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:16.743128  571789 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:16.743179  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.780101  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.780125  571789 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:16.780172  571789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:16.809837  571789 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:16.809872  571789 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:16.809883  571789 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:16.810002  571789 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:16.810090  571789 ssh_runner.go:195] Run: crio config
	I1120 21:23:16.863639  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:16.863659  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:16.863681  571789 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:16.863704  571789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:16.863822  571789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
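This multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from cluster parameters by kubeadm.go before being copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of just the InitConfiguration head; the template fields are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		params := struct {
			AdvertiseAddress string
			BindPort         int
			NodeName         string
		}{"192.168.103.2", 8443, "newest-cni-678421"}
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}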
	I1120 21:23:16.863884  571789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:16.873403  571789 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:16.873494  571789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:16.881985  571789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:16.896085  571789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:16.913519  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 21:23:16.928859  571789 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:16.933334  571789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:16.945027  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:17.031776  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:17.058982  571789 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:17.059010  571789 certs.go:195] generating shared ca certs ...
	I1120 21:23:17.059029  571789 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.059186  571789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:17.059248  571789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:17.059262  571789 certs.go:257] generating profile certs ...
	I1120 21:23:17.059323  571789 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:17.059344  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt with IP's: []
	I1120 21:23:17.213357  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt ...
	I1120 21:23:17.213389  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.crt: {Name:mke2db14d5c940e88a112fbde2b7f7a5c236c264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213571  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key ...
	I1120 21:23:17.213582  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key: {Name:mk64627472328d961f5d0acc5bb1ae55a18c598e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.213666  571789 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:17.213689  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1120 21:23:17.465354  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb ...
	I1120 21:23:17.465382  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb: {Name:mk1f657111bdac9ee1dbd7f52b9080823e78b0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465538  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb ...
	I1120 21:23:17.465551  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb: {Name:mk0b65e76824a55204f187e73dc35407cb7853bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.465624  571789 certs.go:382] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt
	I1120 21:23:17.465704  571789 certs.go:386] copying /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb -> /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key
	I1120 21:23:17.465758  571789 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:17.465775  571789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt with IP's: []
	I1120 21:23:17.786236  571789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt ...
	I1120 21:23:17.786271  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt: {Name:mkf64e7d9fa7e272a656caab1db35f0d50079c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.786461  571789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key ...
	I1120 21:23:17.786477  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key: {Name:mkadbe10d3a0cb1e1581b893a1e5760fc272fd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:17.787184  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:17.787274  571789 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:17.787292  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:17.787316  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:17.787339  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:17.787359  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:17.787408  571789 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:17.788027  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:17.809571  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:17.829725  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:17.850042  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:17.870161  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:17.891028  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:17.910446  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:17.930120  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:17.949077  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:17.975331  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:17.995043  571789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:18.013730  571789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:18.027267  571789 ssh_runner.go:195] Run: openssl version
	I1120 21:23:18.033999  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.042006  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:18.049852  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053838  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.053894  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:18.092857  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:18.101344  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1120 21:23:18.109957  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.119032  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:18.127682  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132138  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.132200  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:18.181104  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:18.189712  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/254094.pem /etc/ssl/certs/51391683.0
	I1120 21:23:18.198524  571789 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.206158  571789 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:18.213580  571789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217316  571789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.217376  571789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:18.254832  571789 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:23:18.263424  571789 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2540942.pem /etc/ssl/certs/3ec20f2e.0
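Each CA certificate is installed under its own name and again as an OpenSSL subject-hash symlink (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL-style trust stores are indexed. A sketch of that pairing; the certificate path is a placeholder and the openssl binary is assumed present:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
		// openssl prints the subject hash used for the symlink name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // mimic ln -fs (force replace)
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}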
	I1120 21:23:18.272052  571789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:18.276149  571789 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1120 21:23:18.276225  571789 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:18.276317  571789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:18.276376  571789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:18.305337  571789 cri.go:89] found id: ""
	I1120 21:23:18.305409  571789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:18.314096  571789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1120 21:23:18.322873  571789 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1120 21:23:18.322928  571789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1120 21:23:18.331021  571789 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1120 21:23:18.331048  571789 kubeadm.go:158] found existing configuration files:
	
	I1120 21:23:18.331102  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1120 21:23:18.338959  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1120 21:23:18.339007  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1120 21:23:18.346732  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1120 21:23:18.354398  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1120 21:23:18.354456  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1120 21:23:18.361888  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.370477  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1120 21:23:18.370533  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1120 21:23:18.378355  571789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1120 21:23:18.387242  571789 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1120 21:23:18.387302  571789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1120 21:23:18.397935  571789 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1120 21:23:18.476910  571789 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1120 21:23:18.555544  571789 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1120 21:23:19.391574  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:21.392350  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:23.892432  564741 pod_ready.go:104] pod "coredns-66bc5c9577-g47lf" is not "Ready", error: <nil>
	W1120 21:23:20.268264  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:22.768335  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:25.391690  564741 pod_ready.go:94] pod "coredns-66bc5c9577-g47lf" is "Ready"
	I1120 21:23:25.391743  564741 pod_ready.go:86] duration metric: took 35.50597602s for pod "coredns-66bc5c9577-g47lf" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.394978  564741 pod_ready.go:83] waiting for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.399661  564741 pod_ready.go:94] pod "etcd-embed-certs-714571" is "Ready"
	I1120 21:23:25.399686  564741 pod_ready.go:86] duration metric: took 4.680651ms for pod "etcd-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.402021  564741 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.405973  564741 pod_ready.go:94] pod "kube-apiserver-embed-certs-714571" is "Ready"
	I1120 21:23:25.405997  564741 pod_ready.go:86] duration metric: took 3.949841ms for pod "kube-apiserver-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.407841  564741 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.589324  564741 pod_ready.go:94] pod "kube-controller-manager-embed-certs-714571" is "Ready"
	I1120 21:23:25.589354  564741 pod_ready.go:86] duration metric: took 181.489846ms for pod "kube-controller-manager-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:25.789548  564741 pod_ready.go:83] waiting for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.190409  564741 pod_ready.go:94] pod "kube-proxy-nlj6n" is "Ready"
	I1120 21:23:26.190444  564741 pod_ready.go:86] duration metric: took 400.867423ms for pod "kube-proxy-nlj6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.390084  564741 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789386  564741 pod_ready.go:94] pod "kube-scheduler-embed-certs-714571" is "Ready"
	I1120 21:23:26.789415  564741 pod_ready.go:86] duration metric: took 399.299576ms for pod "kube-scheduler-embed-certs-714571" in "kube-system" namespace to be "Ready" or be gone ...
	I1120 21:23:26.789427  564741 pod_ready.go:40] duration metric: took 36.907183518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1120 21:23:26.838827  564741 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:26.847832  564741 out.go:179] * Done! kubectl is now configured to use "embed-certs-714571" cluster and "default" namespace by default
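The pod_ready warnings threaded through this log come from a poll that re-reads each kube-system pod until its PodReady condition turns True, at roughly 2s intervals. A minimal client-go rendition of one such check (the kubeconfig path and pod name are placeholders):

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(
				context.Background(), "coredns-66bc5c9577-g47lf", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				log.Printf("pod %q is Ready", pod.Name)
				return
			}
			log.Printf("pod is not Ready yet, error: %v", err)
			time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
		}
	}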
	W1120 21:23:25.269305  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:27.766813  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:29.313350  571789 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1120 21:23:29.313459  571789 kubeadm.go:319] [preflight] Running pre-flight checks
	I1120 21:23:29.313610  571789 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1120 21:23:29.313681  571789 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1120 21:23:29.313746  571789 kubeadm.go:319] OS: Linux
	I1120 21:23:29.313822  571789 kubeadm.go:319] CGROUPS_CPU: enabled
	I1120 21:23:29.313901  571789 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1120 21:23:29.313981  571789 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1120 21:23:29.314064  571789 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1120 21:23:29.314133  571789 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1120 21:23:29.314196  571789 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1120 21:23:29.314321  571789 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1120 21:23:29.314392  571789 kubeadm.go:319] CGROUPS_IO: enabled
	I1120 21:23:29.314498  571789 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1120 21:23:29.314637  571789 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1120 21:23:29.314764  571789 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1120 21:23:29.314845  571789 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1120 21:23:29.316777  571789 out.go:252]   - Generating certificates and keys ...
	I1120 21:23:29.316887  571789 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1120 21:23:29.316965  571789 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1120 21:23:29.317057  571789 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1120 21:23:29.317139  571789 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1120 21:23:29.317270  571789 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1120 21:23:29.317353  571789 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1120 21:23:29.317420  571789 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1120 21:23:29.317573  571789 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317651  571789 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1120 21:23:29.317860  571789 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-678421] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1120 21:23:29.317969  571789 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1120 21:23:29.318061  571789 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1120 21:23:29.318157  571789 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1120 21:23:29.318279  571789 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1120 21:23:29.318369  571789 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1120 21:23:29.318477  571789 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1120 21:23:29.318544  571789 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1120 21:23:29.318663  571789 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1120 21:23:29.318746  571789 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1120 21:23:29.318856  571789 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1120 21:23:29.318951  571789 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1120 21:23:29.320271  571789 out.go:252]   - Booting up control plane ...
	I1120 21:23:29.320352  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1120 21:23:29.320414  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1120 21:23:29.320475  571789 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1120 21:23:29.320580  571789 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1120 21:23:29.320662  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1120 21:23:29.320749  571789 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1120 21:23:29.320816  571789 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1120 21:23:29.320848  571789 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1120 21:23:29.320957  571789 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1120 21:23:29.321044  571789 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1120 21:23:29.321097  571789 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.080831ms
	I1120 21:23:29.321173  571789 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1120 21:23:29.321275  571789 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1120 21:23:29.321347  571789 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1120 21:23:29.321409  571789 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1120 21:23:29.321471  571789 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.185451221s
	I1120 21:23:29.321533  571789 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.636543543s
	I1120 21:23:29.321592  571789 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002088865s
	I1120 21:23:29.321680  571789 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1120 21:23:29.321892  571789 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1120 21:23:29.321990  571789 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1120 21:23:29.322312  571789 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-678421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1120 21:23:29.322381  571789 kubeadm.go:319] [bootstrap-token] Using token: bgtwzb.1jmxu7h8xrihsar6
	I1120 21:23:29.324385  571789 out.go:252]   - Configuring RBAC rules ...
	I1120 21:23:29.324482  571789 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1120 21:23:29.324564  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1120 21:23:29.324693  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1120 21:23:29.324827  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1120 21:23:29.324923  571789 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1120 21:23:29.324996  571789 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1120 21:23:29.325094  571789 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1120 21:23:29.325134  571789 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1120 21:23:29.325175  571789 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1120 21:23:29.325181  571789 kubeadm.go:319] 
	I1120 21:23:29.325253  571789 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1120 21:23:29.325263  571789 kubeadm.go:319] 
	I1120 21:23:29.325343  571789 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1120 21:23:29.325356  571789 kubeadm.go:319] 
	I1120 21:23:29.325382  571789 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1120 21:23:29.325434  571789 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1120 21:23:29.325475  571789 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1120 21:23:29.325480  571789 kubeadm.go:319] 
	I1120 21:23:29.325522  571789 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1120 21:23:29.325529  571789 kubeadm.go:319] 
	I1120 21:23:29.325566  571789 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1120 21:23:29.325571  571789 kubeadm.go:319] 
	I1120 21:23:29.325614  571789 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1120 21:23:29.325680  571789 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1120 21:23:29.325744  571789 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1120 21:23:29.325751  571789 kubeadm.go:319] 
	I1120 21:23:29.325875  571789 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1120 21:23:29.325954  571789 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1120 21:23:29.325961  571789 kubeadm.go:319] 
	I1120 21:23:29.326041  571789 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326142  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d \
	I1120 21:23:29.326163  571789 kubeadm.go:319] 	--control-plane 
	I1120 21:23:29.326170  571789 kubeadm.go:319] 
	I1120 21:23:29.326275  571789 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1120 21:23:29.326284  571789 kubeadm.go:319] 
	I1120 21:23:29.326350  571789 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bgtwzb.1jmxu7h8xrihsar6 \
	I1120 21:23:29.326438  571789 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:9c4103172dab36b84edc0493cee59ab4b9e2c3b8e1a54a7147a0a9ee52f4ca7d 
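
The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from the certificateDir used in the [certs] phase (path taken from the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA certificate written under the certificateDir from the [certs] phase above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the raw Subject Public Key Info, not the whole certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
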
	I1120 21:23:29.326473  571789 cni.go:84] Creating CNI manager for ""
	I1120 21:23:29.326482  571789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:29.327679  571789 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1120 21:23:29.328583  571789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1120 21:23:29.332973  571789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1120 21:23:29.332989  571789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1120 21:23:29.346147  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1120 21:23:29.573681  571789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1120 21:23:29.573755  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:29.573814  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-678421 minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173 minikube.k8s.io/name=newest-cni-678421 minikube.k8s.io/primary=true
	I1120 21:23:29.667824  571789 ops.go:34] apiserver oom_adj: -16
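
The oom_adj check above (ops.go) reads back /proc/<pid>/oom_adj for the apiserver process located with pgrep; -16 means the kernel's OOM killer will strongly prefer other victims. An equivalent stand-alone sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID, as the pgrep invocation above does.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // expect -16
}
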
	I1120 21:23:29.667832  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.168146  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:30.667910  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1120 21:23:30.266667  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:32.767122  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	I1120 21:23:31.168579  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:31.668019  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.168742  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:32.668690  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.168318  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:33.668957  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.168656  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.668775  571789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1120 21:23:34.751348  571789 kubeadm.go:1114] duration metric: took 5.177641354s to wait for elevateKubeSystemPrivileges
	I1120 21:23:34.751396  571789 kubeadm.go:403] duration metric: took 16.475185755s to StartCluster
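
The half-second cadence of the repeated `kubectl get sa default` runs above (the elevateKubeSystemPrivileges wait) is a poll-until-ready loop: retry until the "default" ServiceAccount exists, then proceed. Sketched with apimachinery's wait helper and client-go instead of shelling out (package and function names, interval, and timeout are illustrative):

package kverify // illustrative package name

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls every 500ms until the "default" ServiceAccount exists,
// mirroring the repeated `kubectl get sa default` calls in the log above.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
				return false, nil // not created yet; keep polling
			}
			return true, nil
		})
}
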
	I1120 21:23:34.751420  571789 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.751503  571789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:34.753522  571789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:34.753817  571789 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:34.753838  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1120 21:23:34.753968  571789 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:34.754086  571789 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:34.754105  571789 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	I1120 21:23:34.754113  571789 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:34.754136  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.754148  571789 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:34.754313  571789 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:34.754557  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.754792  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.756687  571789 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:34.759774  571789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:34.786514  571789 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:34.787803  571789 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.787913  571789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:34.788029  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.790763  571789 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	I1120 21:23:34.790812  571789 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:34.791301  571789 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:34.823476  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.825705  571789 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:34.825731  571789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:34.825787  571789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:34.852807  571789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:34.872624  571789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1120 21:23:34.937278  571789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:34.942585  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:34.968135  571789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:35.062180  571789 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
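
The sed pipeline a few lines up splices a hosts block (plus a log directive) into the CoreDNS Corefile ahead of its forward plugin, which is what the "host record injected" message confirms. Reconstructed from that command, the relevant part of the resulting Corefile looks roughly like (surrounding plugins omitted, formatting approximate):

        log
        errors
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

so host.minikube.internal resolves, from inside pods, to the host-side address of the cluster's Docker network.
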
	I1120 21:23:35.063734  571789 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:35.063787  571789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:35.251967  571789 api_server.go:72] duration metric: took 498.105523ms to wait for apiserver process to appear ...
	I1120 21:23:35.252003  571789 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:35.252029  571789 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:35.257634  571789 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:35.258651  571789 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:35.258686  571789 api_server.go:131] duration metric: took 6.676193ms to wait for apiserver health ...
	I1120 21:23:35.258695  571789 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:35.258787  571789 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1120 21:23:35.260410  571789 addons.go:515] duration metric: took 506.448699ms for enable addons: enabled=[storage-provisioner default-storageclass]
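
The healthz wait above is a plain HTTPS GET against the apiserver, succeeding once /healthz returns 200 with body "ok". A minimal sketch of the same probe; it skips TLS verification instead of loading the cluster CA, so it is for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
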
	I1120 21:23:35.261510  571789 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:35.261546  571789 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261567  571789 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:35.261587  571789 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:35.261597  571789 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:35.261608  571789 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:35.261621  571789 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:35.261629  571789 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:35.261638  571789 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:35.261647  571789 system_pods.go:74] duration metric: took 2.944734ms to wait for pod list to return data ...
	I1120 21:23:35.261657  571789 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:35.264365  571789 default_sa.go:45] found service account: "default"
	I1120 21:23:35.264386  571789 default_sa.go:55] duration metric: took 2.722096ms for default service account to be created ...
	I1120 21:23:35.264399  571789 kubeadm.go:587] duration metric: took 510.545674ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:35.264416  571789 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:35.266974  571789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:35.267006  571789 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:35.267024  571789 node_conditions.go:105] duration metric: took 2.601571ms to run NodePressure ...
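
The NodePressure verification above reads the node object's capacity and conditions; the ephemeral-storage and cpu figures it logs come straight from node.Status.Capacity. A client-go sketch of the same read (node name taken from the log; clientset construction omitted):

package kverify // illustrative package name

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity reports the same figures node_conditions.go logs above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, "newest-cni-678421", metav1.GetOptions{})
	if err != nil {
		return err
	}
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String()) // 304681132Ki
	fmt.Printf("node cpu capacity is %s\n", cpu.String())               // 8
	return nil
}
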
	I1120 21:23:35.267040  571789 start.go:242] waiting for startup goroutines ...
	I1120 21:23:35.567181  571789 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-678421" context rescaled to 1 replicas
	I1120 21:23:35.567240  571789 start.go:247] waiting for cluster config update ...
	I1120 21:23:35.567257  571789 start.go:256] writing updated cluster config ...
	I1120 21:23:35.567561  571789 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:35.624092  571789 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:35.625572  571789 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	W1120 21:23:34.771353  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	W1120 21:23:37.268021  567536 pod_ready.go:104] pod "coredns-66bc5c9577-zkl9z" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.73338383Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.733415124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.733433837Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737556472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737579052Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.737598145Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.741510597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.741541504Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.773007403Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.777849473Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.77788694Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.7779109Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.786825193Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:22:59 embed-certs-714571 crio[570]: time="2025-11-20T21:22:59.786861026Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:23:10 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.978512567Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=70c5627c-f4df-4e98-bc00-6ad9826bfd62 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:10 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.995033049Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=646babb9-cb44-4b4c-876a-172b70f40dc3 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.998891701Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=1d435a08-babd-4e87-b29d-cddd67f9c4bf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:10.999066581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.044321012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.045020321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.319277471Z" level=info msg="Created container 0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=1d435a08-babd-4e87-b29d-cddd67f9c4bf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.320078473Z" level=info msg="Starting container: 0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4" id=13d3bd6d-a7d4-4955-afa7-a67c46f886e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:11 embed-certs-714571 crio[570]: time="2025-11-20T21:23:11.322209772Z" level=info msg="Started container" PID=1779 containerID=0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper id=13d3bd6d-a7d4-4955-afa7-a67c46f886e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=928656077e513dc15b949906f88480c46a990846593584b9a63121826ff7ab03
	Nov 20 21:23:12 embed-certs-714571 crio[570]: time="2025-11-20T21:23:12.088354872Z" level=info msg="Removing container: eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997" id=19a9a34e-c3e3-4c1c-a61b-ec25b0d8e7d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:12 embed-certs-714571 crio[570]: time="2025-11-20T21:23:12.177865261Z" level=info msg="Removed container eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl/dashboard-metrics-scraper" id=19a9a34e-c3e3-4c1c-a61b-ec25b0d8e7d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b66165ea5d62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   928656077e513       dashboard-metrics-scraper-6ffb444bf9-zjxvl   kubernetes-dashboard
	369fc37c64c4b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   dbba6ecb55c22       kubernetes-dashboard-855c9754f9-km7xn        kubernetes-dashboard
	a23c77cfc3824       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   0aeec5dd20432       storage-provisioner                          kube-system
	24cb3553d837b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   0aeec5dd20432       storage-provisioner                          kube-system
	7711d5f53716a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   7f766284e7df5       coredns-66bc5c9577-g47lf                     kube-system
	9a183b300cec1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   4f6d9a7048b38       busybox                                      default
	eb70c0bf6966d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   9d6eea8e1c64a       kube-proxy-nlj6n                             kube-system
	a71da522ea0a7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   eb88deef215ae       kindnet-5ctwj                                kube-system
	211d625d3d512       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   6e9394a6ab71c       kube-controller-manager-embed-certs-714571   kube-system
	1fb52640b776a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   d51d1c82cfc47       kube-scheduler-embed-certs-714571            kube-system
	037a8b45fa83d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   8bed5f3694fb9       kube-apiserver-embed-certs-714571            kube-system
	e73953c845da8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   48b5a2df32eb0       etcd-embed-certs-714571                      kube-system
	
	
	==> coredns [7711d5f53716a82a70b54cb8d5a82ef958fcea1ff3f62034d3762b2fb069314a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42065 - 28366 "HINFO IN 8354263009836140719.3001095838449680950. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.078162508s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
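
The dial errors above target 10.96.0.1:443, the ClusterIP of the default kubernetes Service (the first address of the 10.96.0.0/12 service CIDR allocated earlier in this log); an i/o timeout there usually means the service dataplane (kube-proxy/CNI) was not forwarding yet when CoreDNS started. A quick connectivity sketch one could run from inside a pod to reproduce the check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// TCP-level reachability of the in-cluster apiserver VIP; a timeout here
	// matches the CoreDNS "dial tcp 10.96.0.1:443: i/o timeout" errors above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable")
}
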
	
	
	==> describe nodes <==
	Name:               embed-certs-714571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-714571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=embed-certs-714571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_21_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-714571
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:21:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:18 +0000   Thu, 20 Nov 2025 21:22:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-714571
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                b8c8edd2-d291-40b8-8776-13cdc9b6d9a8
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-g47lf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-714571                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-5ctwj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-714571             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-714571    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-nlj6n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-714571             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zjxvl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-km7xn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m1s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m1s)  kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m1s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-714571 event: Registered Node embed-certs-714571 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-714571 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node embed-certs-714571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node embed-certs-714571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-714571 event: Registered Node embed-certs-714571 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [e73953c845da89581a4014650b3b48e09e190cf5fd1bae761b09dc8bee64105b] <==
	{"level":"warn","ts":"2025-11-20T21:22:47.745010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.752067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.758897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.766434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.775263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.781600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.787992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.800376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.806422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.812750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.819727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.826713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.833055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.840867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.848147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.854889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.861517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.868945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.875199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.888436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.894863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.903062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:22:47.952005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-20T21:23:11.452288Z","caller":"traceutil/trace.go:172","msg":"trace[903564221] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"128.128496ms","start":"2025-11-20T21:23:11.324134Z","end":"2025-11-20T21:23:11.452262Z","steps":["trace[903564221] 'process raft request'  (duration: 127.9271ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-20T21:23:11.716896Z","caller":"traceutil/trace.go:172","msg":"trace[1583924621] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"119.543928ms","start":"2025-11-20T21:23:11.597328Z","end":"2025-11-20T21:23:11.716872Z","steps":["trace[1583924621] 'process raft request'  (duration: 119.330247ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:23:44 up  4:06,  0 user,  load average: 3.83, 4.51, 2.95
	Linux embed-certs-714571 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a71da522ea0a7688f16bc3aa91232370fb2211f6549b85cab0f482152d953d06] <==
	I1120 21:22:49.525955       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1120 21:22:49.526179       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:22:49.526205       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:22:49.526256       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:22:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:22:49.728838       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:22:49.728860       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:22:49.728869       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:22:49.728988       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:22:49.729124       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1120 21:22:49.801851       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1120 21:22:50.929054       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:22:50.929094       1 metrics.go:72] Registering metrics
	I1120 21:22:50.929236       1 controller.go:711] "Syncing nftables rules"
	I1120 21:22:59.728565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:22:59.728641       1 main.go:301] handling current node
	I1120 21:23:09.734328       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:09.734367       1 main.go:301] handling current node
	I1120 21:23:19.728385       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:19.728419       1 main.go:301] handling current node
	I1120 21:23:29.730519       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:29.730558       1 main.go:301] handling current node
	I1120 21:23:39.730755       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1120 21:23:39.730791       1 main.go:301] handling current node
	
	
	==> kube-apiserver [037a8b45fa83df6929315e2b0cfb4dec7b265ac732145109a7a92495ce7c7f37] <==
	I1120 21:22:48.417652       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 21:22:48.418058       1 aggregator.go:171] initial CRD sync complete...
	I1120 21:22:48.418069       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:22:48.418075       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:22:48.418081       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:22:48.417700       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 21:22:48.417797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1120 21:22:48.418756       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1120 21:22:48.419188       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:22:48.419816       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1120 21:22:48.419861       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1120 21:22:48.425909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:22:48.446191       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:22:48.465288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:22:48.649067       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:22:48.681990       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:22:48.701120       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:22:48.708003       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:22:48.716548       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:22:48.750729       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.56.41"}
	I1120 21:22:48.761837       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.235.194"}
	I1120 21:22:49.321686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:22:52.133392       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:22:52.235712       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:22:52.333361       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [211d625d3d512414e8503f451d8f2d5b09a473bea4d6cea10654872ca03ed28c] <==
	I1120 21:22:51.736339       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:22:51.749605       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:51.751938       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:22:51.780500       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:22:51.780537       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1120 21:22:51.780558       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1120 21:22:51.780601       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:22:51.780703       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:22:51.780734       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:22:51.780810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:22:51.780827       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:22:51.780846       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:22:51.781428       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:22:51.781470       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:22:51.785783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1120 21:22:51.785867       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1120 21:22:51.785898       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1120 21:22:51.785908       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1120 21:22:51.785913       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1120 21:22:51.786930       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:22:51.796122       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1120 21:22:51.801496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:22:51.801512       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:22:51.801521       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:22:51.805620       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [eb70c0bf6966de6944714ef82d994081eb2a5388ae7b02f4dde2b864a14d3f45] <==
	I1120 21:22:49.368473       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:22:49.453555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:22:49.553911       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:22:49.553961       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1120 21:22:49.554055       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:22:49.575683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:22:49.575752       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:22:49.581298       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:22:49.581716       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:22:49.581756       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:49.583198       1 config.go:200] "Starting service config controller"
	I1120 21:22:49.583316       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:22:49.583380       1 config.go:309] "Starting node config controller"
	I1120 21:22:49.583395       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:22:49.583402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:22:49.583468       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:22:49.583578       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:22:49.583600       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:22:49.583709       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:22:49.683969       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:22:49.684040       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:22:49.684051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1fb52640b776a50023e67e558d7cb269726d7e01003d1467c20bd70139dad7d0] <==
	I1120 21:22:47.775097       1 serving.go:386] Generated self-signed cert in-memory
	I1120 21:22:48.389016       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:22:48.389048       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:22:48.395160       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:22:48.395368       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1120 21:22:48.395392       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1120 21:22:48.395434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:22:48.395859       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.395908       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.395992       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:48.396010       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:22:48.495848       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1120 21:22:48.496039       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1120 21:22:48.496151       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474047     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zb4\" (UniqueName: \"kubernetes.io/projected/74c07910-53db-450e-8569-ca8454ffb12f-kube-api-access-64zb4\") pod \"kubernetes-dashboard-855c9754f9-km7xn\" (UID: \"74c07910-53db-450e-8569-ca8454ffb12f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn"
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474072     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/74c07910-53db-450e-8569-ca8454ffb12f-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-km7xn\" (UID: \"74c07910-53db-450e-8569-ca8454ffb12f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn"
	Nov 20 21:22:52 embed-certs-714571 kubelet[735]: I1120 21:22:52.474097     735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/37045a96-fcea-45f3-a11b-712b1d99ad70-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zjxvl\" (UID: \"37045a96-fcea-45f3-a11b-712b1d99ad70\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl"
	Nov 20 21:22:55 embed-certs-714571 kubelet[735]: I1120 21:22:55.163299     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 21:22:56 embed-certs-714571 kubelet[735]: I1120 21:22:56.029388     735 scope.go:117] "RemoveContainer" containerID="279e1467f671ea0f2529198f8bf029057c91928c7081a424c05bedbdd993cf92"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: I1120 21:22:57.034774     735 scope.go:117] "RemoveContainer" containerID="279e1467f671ea0f2529198f8bf029057c91928c7081a424c05bedbdd993cf92"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: I1120 21:22:57.034936     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:57 embed-certs-714571 kubelet[735]: E1120 21:22:57.035158     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:58 embed-certs-714571 kubelet[735]: I1120 21:22:58.042654     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:58 embed-certs-714571 kubelet[735]: E1120 21:22:58.042858     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: I1120 21:22:59.047051     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: E1120 21:22:59.047273     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:22:59 embed-certs-714571 kubelet[735]: I1120 21:22:59.059492     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-km7xn" podStartSLOduration=0.870975383 podStartE2EDuration="7.059467766s" podCreationTimestamp="2025-11-20 21:22:52 +0000 UTC" firstStartedPulling="2025-11-20 21:22:52.741131595 +0000 UTC m=+6.856164465" lastFinishedPulling="2025-11-20 21:22:58.929623979 +0000 UTC m=+13.044656848" observedRunningTime="2025-11-20 21:22:59.059357823 +0000 UTC m=+13.174390700" watchObservedRunningTime="2025-11-20 21:22:59.059467766 +0000 UTC m=+13.174500645"
	Nov 20 21:23:10 embed-certs-714571 kubelet[735]: I1120 21:23:10.977884     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: I1120 21:23:12.086632     735 scope.go:117] "RemoveContainer" containerID="eafc66ad6a1811cb01ba3d9431224a4eb460c21420035e71f240d0e5e8dfd997"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: I1120 21:23:12.087016     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:12 embed-certs-714571 kubelet[735]: E1120 21:23:12.087204     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:18 embed-certs-714571 kubelet[735]: I1120 21:23:18.682647     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:18 embed-certs-714571 kubelet[735]: E1120 21:23:18.682895     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:29 embed-certs-714571 kubelet[735]: I1120 21:23:29.977506     735 scope.go:117] "RemoveContainer" containerID="0b66165ea5d62f913b4247199124a73a7108c2388c7eb2b7e8b6f0701c5cb6f4"
	Nov 20 21:23:29 embed-certs-714571 kubelet[735]: E1120 21:23:29.977738     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zjxvl_kubernetes-dashboard(37045a96-fcea-45f3-a11b-712b1d99ad70)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zjxvl" podUID="37045a96-fcea-45f3-a11b-712b1d99ad70"
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:38 embed-certs-714571 systemd[1]: kubelet.service: Consumed 1.783s CPU time.
	
	
	==> kubernetes-dashboard [369fc37c64c4ba9c20d07c36ceb3590cd05d4d24197202d054e34de8c1658b85] <==
	2025/11/20 21:22:59 Starting overwatch
	2025/11/20 21:22:59 Using namespace: kubernetes-dashboard
	2025/11/20 21:22:59 Using in-cluster config to connect to apiserver
	2025/11/20 21:22:59 Using secret token for csrf signing
	2025/11/20 21:22:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:22:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:22:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:22:59 Generating JWE encryption key
	2025/11/20 21:22:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:22:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:22:59 Initializing JWE encryption key from synchronized object
	2025/11/20 21:22:59 Creating in-cluster Sidecar client
	2025/11/20 21:22:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:22:59 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [24cb3553d837b9cd3b519b1d4abd2ebf5e29c85691820679474be2cb679bdd30] <==
	I1120 21:22:49.624314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:22:49.626328       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a23c77cfc3824223b4a55b4a5e91995ce7c373240ebbf5f42c31acc96ffafd62] <==
	W1120 21:23:19.755380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.759085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:21.765270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.772738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:23.780429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:25.783470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:25.787751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:27.790855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:27.797068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:29.800566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:29.805850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:31.809933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:31.813806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:33.817756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:33.822975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:35.826360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:35.830193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.833716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.838330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.842073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.846280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.849653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.854028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.856878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.862808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-714571 -n embed-certs-714571
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-714571 -n embed-certs-714571: exit status 2 (358.16593ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-714571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.38s)
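
The kubelet entries above show dashboard-metrics-scraper-6ffb444bf9-zjxvl cycling through CrashLoopBackOff, which also explains the dashboard's failing metric-client health checks. A minimal triage sketch (these kubectl invocations are assumed follow-ups, not captured by the harness; the context name is taken from the logs above):

	# Assumed manual commands; not part of the recorded test run.
	kubectl --context embed-certs-714571 -n kubernetes-dashboard get pods
	# The previous container's logs usually show why the scraper keeps exiting:
	kubectl --context embed-certs-714571 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous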

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-454524 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-454524 --alsologtostderr -v=1: exit status 80 (1.890320272s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-454524 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:23:56.523645  582935 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:56.524057  582935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:56.524070  582935 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:56.524075  582935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:56.524405  582935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:56.524713  582935 out.go:368] Setting JSON to false
	I1120 21:23:56.524749  582935 mustload.go:66] Loading cluster: default-k8s-diff-port-454524
	I1120 21:23:56.525968  582935 config.go:182] Loaded profile config "default-k8s-diff-port-454524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:56.526880  582935 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-454524 --format={{.State.Status}}
	I1120 21:23:56.551877  582935 host.go:66] Checking if "default-k8s-diff-port-454524" exists ...
	I1120 21:23:56.552311  582935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:56.630940  582935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:56.618246662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:56.631648  582935 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-454524 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:23:56.633467  582935 out.go:179] * Pausing node default-k8s-diff-port-454524 ... 
	I1120 21:23:56.634720  582935 host.go:66] Checking if "default-k8s-diff-port-454524" exists ...
	I1120 21:23:56.635022  582935 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:56.635069  582935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-454524
	I1120 21:23:56.656570  582935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/default-k8s-diff-port-454524/id_rsa Username:docker}
	I1120 21:23:56.757779  582935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:56.782857  582935 pause.go:52] kubelet running: true
	I1120 21:23:56.782935  582935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:56.947566  582935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:56.947753  582935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:57.023201  582935 cri.go:89] found id: "75a7a0ef60fecf1571bf9f7857404111211f091c153d30f820fe0aea9f50fb6c"
	I1120 21:23:57.023236  582935 cri.go:89] found id: "7961048e0f06ba18db0fd4b69d46b8e5d7b30eeced91249265cb951ea3ac0b34"
	I1120 21:23:57.023243  582935 cri.go:89] found id: "11d9f9b8da1b13f9483c064d37db32796bf02190d94b45044437a213b52b737e"
	I1120 21:23:57.023248  582935 cri.go:89] found id: "69bf579c533fce3c38b538426bf2830c1ba9b8584c53e4f65be87a667ef0448c"
	I1120 21:23:57.023253  582935 cri.go:89] found id: "d3649474106a31c1b6ed18da94fdcf513679c355ee4944ce2226a39eb9456679"
	I1120 21:23:57.023259  582935 cri.go:89] found id: "3921f1915faef9c9893b50dc9abdbf0b0ffb04a39807d004316cbe5d73fe1e48"
	I1120 21:23:57.023263  582935 cri.go:89] found id: "5e958020a3930a138f25a5b87c0ccb52a3f362bfa85766cf949afb376899d198"
	I1120 21:23:57.023267  582935 cri.go:89] found id: "32e30730c959d73d9d8630bb246958dd9ab048e29f3ecc9af6cec8ea4ffc721e"
	I1120 21:23:57.023270  582935 cri.go:89] found id: "5d74d09802c7420ec57c45ee42a2fca9c71a78fa136e9fce50b2eaf269d99c74"
	I1120 21:23:57.023307  582935 cri.go:89] found id: "808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	I1120 21:23:57.023319  582935 cri.go:89] found id: "0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b"
	I1120 21:23:57.023321  582935 cri.go:89] found id: ""
	I1120 21:23:57.023371  582935 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:57.035591  582935 retry.go:31] will retry after 350.228233ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:57Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:57.386131  582935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:57.400559  582935 pause.go:52] kubelet running: false
	I1120 21:23:57.400623  582935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:57.555173  582935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:57.555299  582935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:57.624344  582935 cri.go:89] found id: "75a7a0ef60fecf1571bf9f7857404111211f091c153d30f820fe0aea9f50fb6c"
	I1120 21:23:57.624376  582935 cri.go:89] found id: "7961048e0f06ba18db0fd4b69d46b8e5d7b30eeced91249265cb951ea3ac0b34"
	I1120 21:23:57.624381  582935 cri.go:89] found id: "11d9f9b8da1b13f9483c064d37db32796bf02190d94b45044437a213b52b737e"
	I1120 21:23:57.624384  582935 cri.go:89] found id: "69bf579c533fce3c38b538426bf2830c1ba9b8584c53e4f65be87a667ef0448c"
	I1120 21:23:57.624387  582935 cri.go:89] found id: "d3649474106a31c1b6ed18da94fdcf513679c355ee4944ce2226a39eb9456679"
	I1120 21:23:57.624390  582935 cri.go:89] found id: "3921f1915faef9c9893b50dc9abdbf0b0ffb04a39807d004316cbe5d73fe1e48"
	I1120 21:23:57.624393  582935 cri.go:89] found id: "5e958020a3930a138f25a5b87c0ccb52a3f362bfa85766cf949afb376899d198"
	I1120 21:23:57.624395  582935 cri.go:89] found id: "32e30730c959d73d9d8630bb246958dd9ab048e29f3ecc9af6cec8ea4ffc721e"
	I1120 21:23:57.624398  582935 cri.go:89] found id: "5d74d09802c7420ec57c45ee42a2fca9c71a78fa136e9fce50b2eaf269d99c74"
	I1120 21:23:57.624405  582935 cri.go:89] found id: "808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	I1120 21:23:57.624408  582935 cri.go:89] found id: "0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b"
	I1120 21:23:57.624420  582935 cri.go:89] found id: ""
	I1120 21:23:57.624485  582935 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:57.636901  582935 retry.go:31] will retry after 443.370466ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:57Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:58.081396  582935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:58.095517  582935 pause.go:52] kubelet running: false
	I1120 21:23:58.095576  582935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:58.253178  582935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:58.253326  582935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:58.325999  582935 cri.go:89] found id: "75a7a0ef60fecf1571bf9f7857404111211f091c153d30f820fe0aea9f50fb6c"
	I1120 21:23:58.326029  582935 cri.go:89] found id: "7961048e0f06ba18db0fd4b69d46b8e5d7b30eeced91249265cb951ea3ac0b34"
	I1120 21:23:58.326037  582935 cri.go:89] found id: "11d9f9b8da1b13f9483c064d37db32796bf02190d94b45044437a213b52b737e"
	I1120 21:23:58.326042  582935 cri.go:89] found id: "69bf579c533fce3c38b538426bf2830c1ba9b8584c53e4f65be87a667ef0448c"
	I1120 21:23:58.326045  582935 cri.go:89] found id: "d3649474106a31c1b6ed18da94fdcf513679c355ee4944ce2226a39eb9456679"
	I1120 21:23:58.326060  582935 cri.go:89] found id: "3921f1915faef9c9893b50dc9abdbf0b0ffb04a39807d004316cbe5d73fe1e48"
	I1120 21:23:58.326064  582935 cri.go:89] found id: "5e958020a3930a138f25a5b87c0ccb52a3f362bfa85766cf949afb376899d198"
	I1120 21:23:58.326069  582935 cri.go:89] found id: "32e30730c959d73d9d8630bb246958dd9ab048e29f3ecc9af6cec8ea4ffc721e"
	I1120 21:23:58.326073  582935 cri.go:89] found id: "5d74d09802c7420ec57c45ee42a2fca9c71a78fa136e9fce50b2eaf269d99c74"
	I1120 21:23:58.326081  582935 cri.go:89] found id: "808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	I1120 21:23:58.326086  582935 cri.go:89] found id: "0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b"
	I1120 21:23:58.326090  582935 cri.go:89] found id: ""
	I1120 21:23:58.326142  582935 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:58.343621  582935 out.go:203] 
	W1120 21:23:58.344794  582935 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:23:58.344820  582935 out.go:285] * 
	* 
	W1120 21:23:58.349507  582935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:23:58.350952  582935 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-454524 --alsologtostderr -v=1 failed: exit status 80
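
The stderr above shows the actual failure point: the pause path shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on the node. A hedged way to probe this from the host, assuming the kic container name from the inspect output below and that crio/crictl are on the node's PATH (none of these commands appear in the report itself):

	# Assumed diagnostics; not part of the recorded test run.
	docker exec default-k8s-diff-port-454524 ls /run/runc /run/crun              # which OCI runtime state dir actually exists
	docker exec default-k8s-diff-port-454524 crio config | grep default_runtime  # runtime CRI-O is configured to use
	docker exec default-k8s-diff-port-454524 crictl ps                           # listing via the CRI socket sidesteps the runc state dir
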
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-454524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-454524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	        "Created": "2025-11-20T21:21:50.606943325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 567738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:54.605116382Z",
	            "FinishedAt": "2025-11-20T21:22:53.294117384Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hosts",
	        "LogPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b-json.log",
	        "Name": "/default-k8s-diff-port-454524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-454524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-454524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	                "LowerDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-454524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-454524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-454524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2ef5c21a87f335b2c5e0ef8c685a6063d8f8c73f5c4db90fefaeddd9e1e62a0",
	            "SandboxKey": "/var/run/docker/netns/a2ef5c21a87f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-454524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a91837c366fb15344a0e0b6f73e85038ca163d1eb2c31d15bcf6f3ca26f3d04",
	                    "EndpointID": "e579203360455473330c2b6d057f0094d2bf49c60bcdf518ed413e0e36851f1a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:14:96:8a:7c:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-454524",
	                        "c409d5fe70c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
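
The NetworkSettings.Ports block above is the source of the SSH endpoint used earlier in the pause trace (127.0.0.1:33123). The same mapping can be read back directly (assumed manual check, not part of the run):

	docker port default-k8s-diff-port-454524 22   # expected to print 127.0.0.1:33123, per the inspect output above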
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524: exit status 2 (370.581781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25: (1.201550469s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-678421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-454524 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ newest-cni-678421 image list --format=json                                                                                                                                                                                                    │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-454524 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ pause   │ -p newest-cni-678421 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:46.048493  580632 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:46.048745  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048754  580632 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:46.048757  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048979  580632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:46.049453  580632 out.go:368] Setting JSON to false
	I1120 21:23:46.050631  580632 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14768,"bootTime":1763659058,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:46.050732  580632 start.go:143] virtualization: kvm guest
	I1120 21:23:46.052729  580632 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:46.054001  580632 notify.go:221] Checking for updates...
	I1120 21:23:46.054037  580632 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:46.055262  580632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:46.056552  580632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:46.057861  580632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:46.058912  580632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:46.060118  580632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:46.061652  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:46.062194  580632 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:46.086682  580632 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:46.086778  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.150294  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.138396042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.150451  580632 docker.go:319] overlay module found
	I1120 21:23:46.152137  580632 out.go:179] * Using the docker driver based on existing profile
	I1120 21:23:46.153355  580632 start.go:309] selected driver: docker
	I1120 21:23:46.153372  580632 start.go:930] validating driver "docker" against &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.153484  580632 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:46.154208  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.217838  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.20805981 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.218634  580632 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:46.218693  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:46.218746  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:46.218816  580632 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.221718  580632 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:46.223273  580632 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:46.224323  580632 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:46.225587  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:46.225618  580632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:46.225634  580632 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:46.225700  580632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:46.225713  580632 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:46.225745  580632 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:46.225840  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.250485  580632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:46.250505  580632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:46.250520  580632 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:46.250545  580632 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:46.250600  580632 start.go:364] duration metric: took 36.944µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:46.250616  580632 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:23:46.250624  580632 fix.go:54] fixHost starting: 
	I1120 21:23:46.250818  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.271749  580632 fix.go:112] recreateIfNeeded on newest-cni-678421: state=Stopped err=<nil>
	W1120 21:23:46.271804  580632 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:23:46.273510  580632 out.go:252] * Restarting existing docker container for "newest-cni-678421" ...
	I1120 21:23:46.273588  580632 cli_runner.go:164] Run: docker start newest-cni-678421
	I1120 21:23:46.640583  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.661704  580632 kic.go:430] container "newest-cni-678421" state is running.
	I1120 21:23:46.662149  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:46.686053  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.686348  580632 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:46.686428  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:46.706344  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:46.706819  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:46.706846  580632 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:46.707727  580632 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56362->127.0.0.1:33133: read: connection reset by peer
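A note on the handshake error above: the restarted container's sshd is not yet accepting connections on the forwarded port, and the provisioner simply retries until it is (success follows about three seconds later). A standalone sketch of the same wait, assuming nc is available and using the forwarded port shown in the log:

	# Poll the forwarded SSH port until sshd in the restarted container answers.
	until nc -z 127.0.0.1 33133; do sleep 1; done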
	I1120 21:23:49.843845  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:49.843888  580632 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:49.843955  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:49.863206  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:49.863522  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:49.863542  580632 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:50.004857  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:50.004940  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.023923  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.024145  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.024162  580632 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:50.156143  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:50.156182  580632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:50.156257  580632 ubuntu.go:190] setting up certificates
	I1120 21:23:50.156270  580632 provision.go:84] configureAuth start
	I1120 21:23:50.156339  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.176257  580632 provision.go:143] copyHostCerts
	I1120 21:23:50.176333  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:50.176355  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:50.176432  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:50.176553  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:50.176566  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:50.176606  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:50.176690  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:50.176700  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:50.176737  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:50.176809  580632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:50.229409  580632 provision.go:177] copyRemoteCerts
	I1120 21:23:50.229481  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:50.229536  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.248655  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.344789  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:50.363151  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:50.381153  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:50.399055  580632 provision.go:87] duration metric: took 242.768844ms to configureAuth
	I1120 21:23:50.399082  580632 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:50.399272  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:50.399375  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.418619  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.418835  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.418850  580632 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:50.711816  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:50.711848  580632 machine.go:97] duration metric: took 4.025481618s to provisionDockerMachine
	I1120 21:23:50.711864  580632 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:50.711878  580632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:50.711941  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:50.711982  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.732036  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.829787  580632 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:50.833560  580632 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:50.833616  580632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:50.833627  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:50.833705  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:50.833835  580632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:50.833980  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:50.842564  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:50.861253  580632 start.go:296] duration metric: took 149.369694ms for postStartSetup
	I1120 21:23:50.861339  580632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:50.861377  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.880051  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.974208  580632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:50.979126  580632 fix.go:56] duration metric: took 4.728491713s for fixHost
	I1120 21:23:50.979158  580632 start.go:83] releasing machines lock for "newest-cni-678421", held for 4.728546595s
	I1120 21:23:50.979256  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.998093  580632 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:50.998117  580632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:50.998142  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.998179  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:51.019563  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.019937  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.111789  580632 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:51.174631  580632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:51.210913  580632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:51.216140  580632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:51.216212  580632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:51.225258  580632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:23:51.225285  580632 start.go:496] detecting cgroup driver to use...
	I1120 21:23:51.225322  580632 detect.go:190] detected "systemd" cgroup driver on host os
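The "systemd" result above is detected from the host; one way to confirm the same fact by hand is to ask the host's Docker daemon, whose info dump earlier in this log already reports CgroupDriver:systemd (a sketch of a manual check, not of minikube's detector):

	# Print the cgroup driver the host daemon runs with; CRI-O inside the kic
	# container is configured to match it ("systemd" on this host).
	docker info --format '{{.CgroupDriver}}'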
	I1120 21:23:51.225373  580632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:51.239684  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:51.252817  580632 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:51.252873  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:51.267677  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:51.280313  580632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:51.359820  580632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:51.440243  580632 docker.go:234] disabling docker service ...
	I1120 21:23:51.440315  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:51.455600  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:51.468814  580632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:51.549991  580632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:51.639411  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:51.653330  580632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:51.668426  580632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:51.668496  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.678387  580632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:51.678448  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.687514  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.696617  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.705907  580632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:51.714416  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.724299  580632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.733643  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
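The run of sed edits above amounts to a small, fixed set of changes to one CRI-O drop-in; a consolidated sketch of the equivalent one-shot edit, assuming the same drop-in path as in the log:

	# Pin the pause image and the systemd cgroup manager in the CRI-O drop-in,
	# then reload units and restart the runtime so the changes take effect.
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio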
	I1120 21:23:51.743143  580632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:51.751288  580632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:51.758956  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:51.839688  580632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:51.991719  580632 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:51.991791  580632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:51.995963  580632 start.go:564] Will wait 60s for crictl version
	I1120 21:23:51.996011  580632 ssh_runner.go:195] Run: which crictl
	I1120 21:23:51.999596  580632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:52.025769  580632 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:52.025844  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.055148  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.087414  580632 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:52.088512  580632 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:52.106859  580632 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:52.111317  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.124544  580632 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 21:23:52.125757  580632 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:52.125892  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:52.125953  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.159731  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.159752  580632 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:52.159798  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.187161  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.187185  580632 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:52.187193  580632 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:52.187306  580632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:52.187376  580632 ssh_runner.go:195] Run: crio config
	I1120 21:23:52.235170  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:52.235200  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:52.235246  580632 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:52.235280  580632 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:52.235426  580632 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
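The rendered kubeadm config above can be sanity-checked independently of minikube; a minimal sketch, assuming kubeadm v1.34 is on the node's PATH and the file sits at the path the log copies it to a few lines below:

	# Validate the generated config before the cluster restart consumes it.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new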
	I1120 21:23:52.235503  580632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:52.243927  580632 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:52.244009  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:52.252390  580632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:52.265368  580632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:52.278329  580632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 21:23:52.292057  580632 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:52.296001  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.306383  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:52.386821  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:52.412054  580632 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:52.412083  580632 certs.go:195] generating shared ca certs ...
	I1120 21:23:52.412101  580632 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:52.412365  580632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:52.412416  580632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:52.412425  580632 certs.go:257] generating profile certs ...
	I1120 21:23:52.412506  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:52.412557  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:52.412600  580632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:52.412708  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:52.412737  580632 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:52.412744  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:52.412764  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:52.412789  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:52.412810  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:52.412858  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:52.413501  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:52.433062  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:52.455200  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:52.474785  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:52.498862  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:52.517609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:52.536537  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:52.554621  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:52.573916  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:52.592807  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:52.612609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:52.631336  580632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:52.644868  580632 ssh_runner.go:195] Run: openssl version
	I1120 21:23:52.651511  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.659232  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:52.666879  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670902  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670977  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.706119  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:52.714315  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.722433  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:52.730632  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734534  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734604  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.769094  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:52.777452  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.785374  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:52.793202  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797062  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797113  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.832297  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
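The install-and-verify pattern above follows OpenSSL's hashed-symlink convention: each CA in /etc/ssl/certs is addressed by its subject hash plus a ".0" suffix, which is what the paired openssl -hash and test -L steps check. A sketch using the paths from the log:

	# The subject hash names the symlink; for this file it should print
	# "3ec20f2e", matching the /etc/ssl/certs/3ec20f2e.0 link tested above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem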
	I1120 21:23:52.840467  580632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:52.844760  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:23:52.879344  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:23:52.914405  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:23:52.957573  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:23:53.002001  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:23:53.059315  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:23:53.118112  580632 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:53.118249  580632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:53.118315  580632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:53.155937  580632 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:53.155964  580632 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:53.155969  580632 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:53.155973  580632 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:53.155977  580632 cri.go:89] found id: ""
	I1120 21:23:53.156028  580632 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:23:53.169575  580632 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:53.169666  580632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:53.178572  580632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:23:53.178598  580632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:23:53.178648  580632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:23:53.186477  580632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:23:53.187131  580632 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-678421" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.187455  580632 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-678421" cluster setting kubeconfig missing "newest-cni-678421" context setting]
	I1120 21:23:53.188141  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
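
The repair above writes the missing "newest-cni-678421" cluster and context entries back into the kubeconfig under a file lock. A short sketch of the existence check that precedes it, assuming client-go's clientcmd loader rather than minikube's internal kubeconfig helpers:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // e.g. go run verify.go $HOME/.kube/config
        cfg, err := clientcmd.LoadFromFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        const name = "newest-cni-678421"
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        // Either entry missing means the kubeconfig "needs updating (will repair)".
        fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
    }
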
	I1120 21:23:53.189852  580632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:23:53.197764  580632 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1120 21:23:53.197798  580632 kubeadm.go:602] duration metric: took 19.193608ms to restartPrimaryControlPlane
	I1120 21:23:53.197808  580632 kubeadm.go:403] duration metric: took 79.708097ms to StartCluster
	I1120 21:23:53.197825  580632 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.197892  580632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.199030  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.199301  580632 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:53.199413  580632 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:53.199502  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:53.199513  580632 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:53.199538  580632 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	W1120 21:23:53.199549  580632 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:23:53.199556  580632 addons.go:70] Setting dashboard=true in profile "newest-cni-678421"
	I1120 21:23:53.199565  580632 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:53.199573  580632 addons.go:239] Setting addon dashboard=true in "newest-cni-678421"
	I1120 21:23:53.199579  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	W1120 21:23:53.199581  580632 addons.go:248] addon dashboard should already be in state true
	I1120 21:23:53.199581  580632 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:53.199609  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.199914  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200071  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200090  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.201838  580632 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:53.203715  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:53.227682  580632 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	W1120 21:23:53.227708  580632 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:23:53.227739  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.228202  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.228303  580632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:53.228953  580632 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:23:53.229725  580632 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.229745  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:53.229800  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.231712  580632 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 21:23:53.232850  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:23:53.232872  580632 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:23:53.232947  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.265766  580632 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.265850  580632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:53.265935  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.266314  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.268641  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.294295  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.343706  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:53.357884  580632 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:53.357961  580632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:53.370517  580632 api_server.go:72] duration metric: took 171.180002ms to wait for apiserver process to appear ...
	I1120 21:23:53.370547  580632 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:53.370574  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:53.384995  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:23:53.385021  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:23:53.387564  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.400161  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:23:53.400191  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:23:53.410438  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.417003  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:23:53.417034  580632 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:23:53.431937  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:23:53.431967  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:23:53.449462  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:23:53.449491  580632 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:23:53.468486  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:23:53.468515  580632 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:23:53.485129  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:23:53.485160  580632 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:23:53.498138  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:23:53.498163  580632 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:23:53.513999  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:53.514025  580632 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:23:53.528983  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:54.509181  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.509231  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.509249  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.515250  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.515284  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.871126  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.876266  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:54.876293  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.038413  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650810705s)
	I1120 21:23:55.038453  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.627983684s)
	I1120 21:23:55.038566  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.509543165s)
	I1120 21:23:55.040585  580632 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-678421 addons enable metrics-server
	
	I1120 21:23:55.050757  580632 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 21:23:55.052024  580632 addons.go:515] duration metric: took 1.852618686s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:55.370859  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.375402  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:55.375429  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.871078  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.875821  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:55.876995  580632 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:55.877022  580632 api_server.go:131] duration metric: took 2.506467275s to wait for apiserver health ...
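
The healthz wait above passes through the apiserver's startup phases: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 "ok". A minimal polling sketch of such a loop (not minikube's implementation; the address is taken from the log, and TLS verification is skipped here, whereas minikube pins the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.103.2:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // apiserver returned "ok": control plane is healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
    }
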
	I1120 21:23:55.877035  580632 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:55.881011  580632 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:55.881052  580632 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881064  580632 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:55.881076  580632 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:55.881086  580632 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:55.881102  580632 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:55.881111  580632 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:55.881120  580632 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:55.881127  580632 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881138  580632 system_pods.go:74] duration metric: took 4.09635ms to wait for pod list to return data ...
	I1120 21:23:55.881153  580632 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:55.883836  580632 default_sa.go:45] found service account: "default"
	I1120 21:23:55.883863  580632 default_sa.go:55] duration metric: took 2.701397ms for default service account to be created ...
	I1120 21:23:55.883875  580632 kubeadm.go:587] duration metric: took 2.684545859s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:55.883891  580632 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:55.886610  580632 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:55.886636  580632 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:55.886650  580632 node_conditions.go:105] duration metric: took 2.75414ms to run NodePressure ...
	I1120 21:23:55.886662  580632 start.go:242] waiting for startup goroutines ...
	I1120 21:23:55.886668  580632 start.go:247] waiting for cluster config update ...
	I1120 21:23:55.886679  580632 start.go:256] writing updated cluster config ...
	I1120 21:23:55.886967  580632 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:55.937055  580632 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:55.938658  580632 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.820230555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.824587719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.824619956Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.59969419Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=85a8f4e4-0936-45b5-8a31-b9e3e63051ea name=/runtime.v1.ImageService/PullImage
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.600360469Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b6201472-d02c-48f0-8c6b-02bd1af63bf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.602027468Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7e2e7198-9148-4a66-855d-00566e5d14b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.607131811Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard" id=01391e26-084d-4a49-9928-3980dcb9a3f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.607309968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.61239745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.612591434Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3a1e6d6cfbf6bede239bc25f0beba606bb85c7f5b19d21c87e1563235a4c0094/merged/etc/group: no such file or directory"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.612927305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.649575939Z" level=info msg="Created container 0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard" id=01391e26-084d-4a49-9928-3980dcb9a3f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.650299245Z" level=info msg="Starting container: 0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b" id=3f69523a-5c5a-43ea-8d82-5f54439846e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.652716328Z" level=info msg="Started container" PID=1756 containerID=0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard id=3f69523a-5c5a-43ea-8d82-5f54439846e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa49cdc27d26a44b148fe3bd862daec3ea229ffd8417f4a126b694e0bcf00bb3
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.986864781Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9cf9065-6a97-48bf-a840-d0991c54633d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.989563691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=493be92f-0214-463c-b6ab-9a8f242f81be name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.992687288Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=2e5751b7-00cf-4e3d-b865-5849461ca8ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.992832992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.999102685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.999610514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.035053642Z" level=info msg="Created container 808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=2e5751b7-00cf-4e3d-b865-5849461ca8ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.035754337Z" level=info msg="Starting container: 808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea" id=38369ab9-534e-479f-a386-5be0dc3cd8f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.038042686Z" level=info msg="Started container" PID=1778 containerID=808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper id=38369ab9-534e-479f-a386-5be0dc3cd8f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec3df1c2092d4318628aa05e610515dbe1f1d6000c71bf0e617ed1d4497f7f3
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.111752232Z" level=info msg="Removing container: aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a" id=309f4846-325a-449b-b513-e75159b90282 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.122576973Z" level=info msg="Removed container aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=309f4846-325a-449b-b513-e75159b90282 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	808bc7edcf5af       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   dec3df1c2092d       dashboard-metrics-scraper-6ffb444bf9-pngff             kubernetes-dashboard
	0306081c17872       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   aa49cdc27d26a       kubernetes-dashboard-855c9754f9-psntr                  kubernetes-dashboard
	75a7a0ef60fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   f708b1045bc5f       storage-provisioner                                    kube-system
	14b6e98fbad5f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   2c8fa22a310bb       busybox                                                default
	7961048e0f06b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   4ad7cf83e43c5       coredns-66bc5c9577-zkl9z                               kube-system
	11d9f9b8da1b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   174f4cb78b9ff       kindnet-clzlq                                          kube-system
	69bf579c533fc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   3f4b2b5975538       kube-proxy-fpnmp                                       kube-system
	d3649474106a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   f708b1045bc5f       storage-provisioner                                    kube-system
	3921f1915faef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   461ccc940f435       etcd-default-k8s-diff-port-454524                      kube-system
	5e958020a3930       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   04de9370ea6bf       kube-apiserver-default-k8s-diff-port-454524            kube-system
	32e30730c959d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   554f8b08fa997       kube-controller-manager-default-k8s-diff-port-454524   kube-system
	5d74d09802c74       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   a2310ba020988       kube-scheduler-default-k8s-diff-port-454524            kube-system
	
	
	==> coredns [7961048e0f06ba18db0fd4b69d46b8e5d7b30eeced91249265cb951ea3ac0b34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51390 - 58663 "HINFO IN 1395662525658519895.2157227740901871941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060849126s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
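
The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries above typically mean CoreDNS cannot yet reach the apiserver's in-cluster Service VIP while kube-proxy and the CNI finish coming back up after the restart. A hypothetical reachability probe for that VIP, as it might be run from inside a pod:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the default "kubernetes" Service VIP (first IP of
        // the 10.96.0.0/12 ServiceCIDR shown in the StartCluster config above).
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("unreachable:", err) // e.g. "i/o timeout"
            return
        }
        conn.Close()
        fmt.Println("reachable")
    }
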
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-454524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-454524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-454524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-454524
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-454524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                5a173afd-5240-460c-a507-61495be2fab4
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-zkl9z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-454524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-clzlq                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-454524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-454524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-fpnmp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-454524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pngff              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-psntr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node default-k8s-diff-port-454524 event: Registered Node default-k8s-diff-port-454524 in Controller
	  Normal  NodeReady                96s                  kubelet          Node default-k8s-diff-port-454524 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node default-k8s-diff-port-454524 event: Registered Node default-k8s-diff-port-454524 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [3921f1915faef9c9893b50dc9abdbf0b0ffb04a39807d004316cbe5d73fe1e48] <==
	{"level":"warn","ts":"2025-11-20T21:23:03.865528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.872328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.878927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.888751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.895793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.902959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.909188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.915590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.923387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.929397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.936277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.951441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.957870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.965285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.972817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.980883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.989095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.997404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.008526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.016292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.022715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.029328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.044823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.051247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.109495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41162","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:23:59 up  4:06,  0 user,  load average: 3.42, 4.39, 2.94
	Linux default-k8s-diff-port-454524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11d9f9b8da1b13f9483c064d37db32796bf02190d94b45044437a213b52b737e] <==
	podIP = 192.168.85.2
	I1120 21:23:05.472981       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:23:05.472999       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:23:05.473020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:23:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:23:05.771809       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:23:05.772008       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:23:05.772050       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:23:05.772313       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:23:05.772796       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:23:05.772802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 21:23:05.868206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 21:23:07.272336       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:23:07.272375       1 metrics.go:72] Registering metrics
	I1120 21:23:07.272466       1 controller.go:711] "Syncing nftables rules"
	I1120 21:23:15.771717       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:15.771784       1 main.go:301] handling current node
	I1120 21:23:25.778300       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:25.778333       1 main.go:301] handling current node
	I1120 21:23:35.771979       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:35.772009       1 main.go:301] handling current node
	I1120 21:23:45.773242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:45.773286       1 main.go:301] handling current node
	I1120 21:23:55.772395       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:55.772440       1 main.go:301] handling current node
	
	
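	# kindnet's early "Failed to watch ... connection refused" errors land while the
	# apiserver is still restarting; the informers recover once caches sync at
	# 21:23:07, after which the single node is handled normally every ~10s. The NRI
	# error is expected on hosts without /var/run/nri/nri.sock. A hedged way to
	# confirm recovery (the app=kindnet label is assumed to match minikube's kindnet
	# DaemonSet):
	kubectl --context default-k8s-diff-port-454524 -n kube-system \
	  logs -l app=kindnet --tail=20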
	==> kube-apiserver [5e958020a3930a138f25a5b87c0ccb52a3f362bfa85766cf949afb376899d198] <==
	I1120 21:23:04.626756       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:23:04.626765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:23:04.626774       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:23:04.627425       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 21:23:04.628648       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 21:23:04.628650       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:23:04.628939       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:23:04.643356       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1120 21:23:04.650775       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:23:04.675654       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:23:04.684209       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:23:04.684329       1 policy_source.go:240] refreshing policies
	I1120 21:23:04.707511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:23:04.929128       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:23:04.964123       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:23:04.985484       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:23:05.000476       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:23:05.012592       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:23:05.070592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.76.198"}
	I1120 21:23:05.082355       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.177.9"}
	I1120 21:23:05.526489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:23:08.129548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:23:08.477477       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:23:08.626557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
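	# The one error above, "Error removing old endpoints from kubernetes service ...",
	# is a transient restart message: storage briefly listed no API server IPs, so the
	# reconciler refused to wipe the kubernetes Service endpoints rather than erase
	# them all. A quick check that the endpoints were re-populated afterwards (the
	# kubernetes.io/service-name label is the standard EndpointSlice selector):
	kubectl --context default-k8s-diff-port-454524 -n default \
	  get endpointslices -l kubernetes.io/service-name=kubernetes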
	==> kube-controller-manager [32e30730c959d73d9d8630bb246958dd9ab048e29f3ecc9af6cec8ea4ffc721e] <==
	I1120 21:23:08.018913       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:23:08.021151       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:23:08.023955       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:23:08.023972       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:23:08.024003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:23:08.024024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:23:08.024151       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:23:08.024162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:23:08.024141       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:23:08.024168       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:23:08.024183       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:23:08.024342       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-454524"
	I1120 21:23:08.024423       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:23:08.024207       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:23:08.024206       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:23:08.024819       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:23:08.025346       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:23:08.026548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:23:08.027778       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:23:08.027793       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:23:08.027799       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:23:08.030282       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:23:08.032784       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:23:08.049389       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:23:08.050372       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [69bf579c533fce3c38b538426bf2830c1ba9b8584c53e4f65be87a667ef0448c] <==
	I1120 21:23:05.409135       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:23:05.472408       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:23:05.573183       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:23:05.573231       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:23:05.573333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:23:05.596472       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:23:05.596555       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:23:05.603731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:23:05.604238       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:23:05.604282       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:05.606996       1 config.go:200] "Starting service config controller"
	I1120 21:23:05.607020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:23:05.607043       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:23:05.607049       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:23:05.607065       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:23:05.607071       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:23:05.607488       1 config.go:309] "Starting node config controller"
	I1120 21:23:05.607519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:23:05.707171       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:23:05.707201       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:23:05.707179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:23:05.707854       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
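	# kube-proxy's own warning above is actionable: with nodePortAddresses unset,
	# NodePort connections are accepted on all local IPs. If that matters, the
	# setting lives in the KubeProxyConfiguration held in the kube-proxy ConfigMap
	# (standard kubeadm layout assumed):
	kubectl --context default-k8s-diff-port-454524 -n kube-system \
	  get configmap kube-proxy -o yaml
	# then, in the embedded config.conf, set:
	#   nodePortAddresses: ["primary"]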
	==> kube-scheduler [5d74d09802c7420ec57c45ee42a2fca9c71a78fa136e9fce50b2eaf269d99c74] <==
	I1120 21:23:03.153741       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:23:04.598693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:23:04.598734       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:23:04.598748       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:23:04.598765       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:23:04.640334       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:23:04.640369       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:04.643331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:04.643442       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:04.644273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:23:04.644345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:23:04.743726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:23:08 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:08.565870     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtr4l\" (UniqueName: \"kubernetes.io/projected/8811704a-85d6-4e90-adb4-08f581d9ade6-kube-api-access-mtr4l\") pod \"dashboard-metrics-scraper-6ffb444bf9-pngff\" (UID: \"8811704a-85d6-4e90-adb4-08f581d9ade6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff"
	Nov 20 21:23:08 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:08.565947     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b6ddf9d-f15b-465a-89af-d622cce06e01-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-psntr\" (UID: \"7b6ddf9d-f15b-465a-89af-d622cce06e01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr"
	Nov 20 21:23:13 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:13.055126     730 scope.go:117] "RemoveContainer" containerID="56587bcbccaca9f35b3d77e91bc55473963ac20900f4d92c72e1f4c5ae224758"
	Nov 20 21:23:13 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:13.105754     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:14.060361     730 scope.go:117] "RemoveContainer" containerID="56587bcbccaca9f35b3d77e91bc55473963ac20900f4d92c72e1f4c5ae224758"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:14.060430     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:14.060588     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:15 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:15.065015     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:15 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:15.065695     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:16 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:16.068093     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:16 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:16.068341     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:17 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:17.084571     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr" podStartSLOduration=1.272838529 podStartE2EDuration="9.084547587s" podCreationTimestamp="2025-11-20 21:23:08 +0000 UTC" firstStartedPulling="2025-11-20 21:23:08.789677949 +0000 UTC m=+6.914335266" lastFinishedPulling="2025-11-20 21:23:16.601386995 +0000 UTC m=+14.726044324" observedRunningTime="2025-11-20 21:23:17.084511867 +0000 UTC m=+15.209169205" watchObservedRunningTime="2025-11-20 21:23:17.084547587 +0000 UTC m=+15.209204924"
	Nov 20 21:23:28 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:28.986341     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:29.110293     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:29.110536     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:29.110797     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:34 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:34.204688     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:34 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:34.204983     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:48 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:48.985680     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:48 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:48.985870     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:56 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:56.935229     730 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: kubelet.service: Consumed 1.897s CPU time.
	
	
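	# The kubelet entries show two things: dashboard-metrics-scraper crash-looping
	# under kubelet's exponential back-off (10s, then 20s), and kubelet itself being
	# stopped by systemd at 21:23:56, which is the pause under test. To see why the
	# scraper keeps exiting, the previous container's logs are the first stop (pod
	# name taken from the log above):
	kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-pngff --previous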
	==> kubernetes-dashboard [0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b] <==
	2025/11/20 21:23:16 Using namespace: kubernetes-dashboard
	2025/11/20 21:23:16 Using in-cluster config to connect to apiserver
	2025/11/20 21:23:16 Using secret token for csrf signing
	2025/11/20 21:23:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:23:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:23:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:23:16 Generating JWE encryption key
	2025/11/20 21:23:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:23:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:23:16 Initializing JWE encryption key from synchronized object
	2025/11/20 21:23:16 Creating in-cluster Sidecar client
	2025/11/20 21:23:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:23:16 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:16 Starting overwatch
	2025/11/20 21:23:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
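	# The "Metric client health check failed ... Retrying in 30 seconds" lines are a
	# direct symptom of the crash-looping scraper: the dashboard-metrics-scraper
	# Service exists but has no ready endpoints. A minimal check:
	kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard \
	  get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper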
	==> storage-provisioner [75a7a0ef60fecf1571bf9f7857404111211f091c153d30f820fe0aea9f50fb6c] <==
	W1120 21:23:35.588256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.592026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:37.596566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.599375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.604758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.607635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.611976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.614743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.619684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:45.623381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:45.627375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:47.630587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:47.634305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:49.637183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:49.643172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:51.646321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:51.650682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:53.654364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:53.658476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:55.661707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:55.666355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:57.670504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:57.674187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:59.677612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:59.681987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
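	# The steady pairs of v1 Endpoints deprecation warnings come from the
	# provisioner's leader election, which appears to still renew an Endpoints-based
	# lock every couple of seconds; on v1.34 they are cosmetic. The lock object can
	# be inspected directly (lock name assumed to be minikube's default):
	kubectl --context default-k8s-diff-port-454524 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml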
	==> storage-provisioner [d3649474106a31c1b6ed18da94fdcf513679c355ee4944ce2226a39eb9456679] <==
	I1120 21:23:05.372787       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:23:05.374361       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524: exit status 2 (368.358497ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
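helpers_test.go's "(may be ok)" note is accurate: minikube status exits non-zero whenever any tracked component is not Running, and after a (partial) pause the kubelet is intentionally stopped, so the host can report "Running" while the command still exits 2. A fuller view of which field tripped the exit code, as a sketch using the same Go-template flags seen elsewhere in this report:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-454524 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'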
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-454524
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-454524:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	        "Created": "2025-11-20T21:21:50.606943325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 567738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:22:54.605116382Z",
	            "FinishedAt": "2025-11-20T21:22:53.294117384Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/hosts",
	        "LogPath": "/var/lib/docker/containers/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b/c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b-json.log",
	        "Name": "/default-k8s-diff-port-454524",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-454524:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-454524",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c409d5fe70c10e9968c57e783ba3069136b305cf43c1f71e8d3b7fef9421644b",
	                "LowerDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3a5d2c0ebbc324ab018cc5b3d9a9fbab373922e1e7608d14e678e5936436f44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-454524",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-454524/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-454524",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-454524",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2ef5c21a87f335b2c5e0ef8c685a6063d8f8c73f5c4db90fefaeddd9e1e62a0",
	            "SandboxKey": "/var/run/docker/netns/a2ef5c21a87f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-454524": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a91837c366fb15344a0e0b6f73e85038ca163d1eb2c31d15bcf6f3ca26f3d04",
	                    "EndpointID": "e579203360455473330c2b6d057f0094d2bf49c60bcdf518ed413e0e36851f1a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:14:96:8a:7c:1c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-454524",
	                        "c409d5fe70c1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
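The inspect output above shows every container port published only on 127.0.0.1 with an ephemeral host port (22 on 33123, 8444 on 33126, and so on). To pull just that mapping back out, docker's Go-template support is enough:

	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-454524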
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524: exit status 2 (380.655088ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-454524 logs -n 25: (1.183228146s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-678421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-454524 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ newest-cni-678421 image list --format=json                                                                                                                                                                                                    │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-454524 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ pause   │ -p newest-cni-678421 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:46.048493  580632 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:46.048745  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048754  580632 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:46.048757  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048979  580632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:46.049453  580632 out.go:368] Setting JSON to false
	I1120 21:23:46.050631  580632 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14768,"bootTime":1763659058,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:46.050732  580632 start.go:143] virtualization: kvm guest
	I1120 21:23:46.052729  580632 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:46.054001  580632 notify.go:221] Checking for updates...
	I1120 21:23:46.054037  580632 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:46.055262  580632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:46.056552  580632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:46.057861  580632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:46.058912  580632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:46.060118  580632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:46.061652  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:46.062194  580632 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:46.086682  580632 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:46.086778  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.150294  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.138396042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.150451  580632 docker.go:319] overlay module found
	I1120 21:23:46.152137  580632 out.go:179] * Using the docker driver based on existing profile
	I1120 21:23:46.153355  580632 start.go:309] selected driver: docker
	I1120 21:23:46.153372  580632 start.go:930] validating driver "docker" against &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.153484  580632 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:46.154208  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.217838  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.20805981 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.218634  580632 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:46.218693  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:46.218746  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:46.218816  580632 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.221718  580632 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:46.223273  580632 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:46.224323  580632 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:46.225587  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:46.225618  580632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:46.225634  580632 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:46.225700  580632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:46.225713  580632 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:46.225745  580632 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:46.225840  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.250485  580632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:46.250505  580632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:46.250520  580632 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:46.250545  580632 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:46.250600  580632 start.go:364] duration metric: took 36.944µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:46.250616  580632 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:23:46.250624  580632 fix.go:54] fixHost starting: 
	I1120 21:23:46.250818  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.271749  580632 fix.go:112] recreateIfNeeded on newest-cni-678421: state=Stopped err=<nil>
	W1120 21:23:46.271804  580632 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:23:46.273510  580632 out.go:252] * Restarting existing docker container for "newest-cni-678421" ...
	I1120 21:23:46.273588  580632 cli_runner.go:164] Run: docker start newest-cni-678421
	I1120 21:23:46.640583  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.661704  580632 kic.go:430] container "newest-cni-678421" state is running.
	I1120 21:23:46.662149  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:46.686053  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.686348  580632 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:46.686428  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:46.706344  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:46.706819  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:46.706846  580632 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:46.707727  580632 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56362->127.0.0.1:33133: read: connection reset by peer
	I1120 21:23:49.843845  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:49.843888  580632 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:49.843955  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:49.863206  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:49.863522  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:49.863542  580632 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:50.004857  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:50.004940  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.023923  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.024145  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.024162  580632 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:50.156143  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:50.156182  580632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:50.156257  580632 ubuntu.go:190] setting up certificates
	I1120 21:23:50.156270  580632 provision.go:84] configureAuth start
	I1120 21:23:50.156339  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.176257  580632 provision.go:143] copyHostCerts
	I1120 21:23:50.176333  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:50.176355  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:50.176432  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:50.176553  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:50.176566  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:50.176606  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:50.176690  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:50.176700  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:50.176737  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:50.176809  580632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:50.229409  580632 provision.go:177] copyRemoteCerts
	I1120 21:23:50.229481  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:50.229536  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.248655  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.344789  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:50.363151  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:50.381153  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:50.399055  580632 provision.go:87] duration metric: took 242.768844ms to configureAuth
	I1120 21:23:50.399082  580632 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:50.399272  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:50.399375  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.418619  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.418835  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.418850  580632 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:50.711816  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:50.711848  580632 machine.go:97] duration metric: took 4.025481618s to provisionDockerMachine
	I1120 21:23:50.711864  580632 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:50.711878  580632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:50.711941  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:50.711982  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.732036  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.829787  580632 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:50.833560  580632 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:50.833616  580632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:50.833627  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:50.833705  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:50.833835  580632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:50.833980  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:50.842564  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:50.861253  580632 start.go:296] duration metric: took 149.369694ms for postStartSetup
	I1120 21:23:50.861339  580632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:50.861377  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.880051  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.974208  580632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:50.979126  580632 fix.go:56] duration metric: took 4.728491713s for fixHost
	I1120 21:23:50.979158  580632 start.go:83] releasing machines lock for "newest-cni-678421", held for 4.728546595s
	I1120 21:23:50.979256  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.998093  580632 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:50.998117  580632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:50.998142  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.998179  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:51.019563  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.019937  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.111789  580632 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:51.174631  580632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:51.210913  580632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:51.216140  580632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:51.216212  580632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:51.225258  580632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:23:51.225285  580632 start.go:496] detecting cgroup driver to use...
	I1120 21:23:51.225322  580632 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:51.225373  580632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:51.239684  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:51.252817  580632 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:51.252873  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:51.267677  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:51.280313  580632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:51.359820  580632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:51.440243  580632 docker.go:234] disabling docker service ...
	I1120 21:23:51.440315  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:51.455600  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:51.468814  580632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:51.549991  580632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:51.639411  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:51.653330  580632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:51.668426  580632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:51.668496  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.678387  580632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:51.678448  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.687514  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.696617  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.705907  580632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:51.714416  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.724299  580632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.733643  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.743143  580632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:51.751288  580632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:51.758956  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:51.839688  580632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:51.991719  580632 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:51.991791  580632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:51.995963  580632 start.go:564] Will wait 60s for crictl version
	I1120 21:23:51.996011  580632 ssh_runner.go:195] Run: which crictl
	I1120 21:23:51.999596  580632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:52.025769  580632 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:52.025844  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.055148  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.087414  580632 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:52.088512  580632 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:52.106859  580632 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:52.111317  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.124544  580632 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 21:23:52.125757  580632 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:52.125892  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:52.125953  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.159731  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.159752  580632 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:52.159798  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.187161  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.187185  580632 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:52.187193  580632 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:52.187306  580632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:52.187376  580632 ssh_runner.go:195] Run: crio config
	I1120 21:23:52.235170  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:52.235200  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:52.235246  580632 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:52.235280  580632 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:52.235426  580632 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:23:52.235503  580632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:52.243927  580632 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:52.244009  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:52.252390  580632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:52.265368  580632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:52.278329  580632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1120 21:23:52.292057  580632 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:52.296001  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.306383  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:52.386821  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:52.412054  580632 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:52.412083  580632 certs.go:195] generating shared ca certs ...
	I1120 21:23:52.412101  580632 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:52.412365  580632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:52.412416  580632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:52.412425  580632 certs.go:257] generating profile certs ...
	I1120 21:23:52.412506  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:52.412557  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:52.412600  580632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:52.412708  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:52.412737  580632 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:52.412744  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:52.412764  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:52.412789  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:52.412810  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:52.412858  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:52.413501  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:52.433062  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:52.455200  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:52.474785  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:52.498862  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:52.517609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:52.536537  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:52.554621  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:52.573916  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:52.592807  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:52.612609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:52.631336  580632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:52.644868  580632 ssh_runner.go:195] Run: openssl version
	I1120 21:23:52.651511  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.659232  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:52.666879  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670902  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670977  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.706119  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:52.714315  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.722433  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:52.730632  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734534  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734604  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.769094  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:52.777452  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.785374  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:52.793202  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797062  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797113  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.832297  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1120 21:23:52.840467  580632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:52.844760  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:23:52.879344  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:23:52.914405  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:23:52.957573  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:23:53.002001  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:23:53.059315  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1120 21:23:53.118112  580632 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:53.118249  580632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:53.118315  580632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:53.155937  580632 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:53.155964  580632 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:53.155969  580632 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:53.155973  580632 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:53.155977  580632 cri.go:89] found id: ""
	I1120 21:23:53.156028  580632 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:23:53.169575  580632 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:53.169666  580632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:53.178572  580632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:23:53.178598  580632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:23:53.178648  580632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:23:53.186477  580632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:23:53.187131  580632 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-678421" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.187455  580632 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-678421" cluster setting kubeconfig missing "newest-cni-678421" context setting]
	I1120 21:23:53.188141  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.189852  580632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:23:53.197764  580632 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1120 21:23:53.197798  580632 kubeadm.go:602] duration metric: took 19.193608ms to restartPrimaryControlPlane
	I1120 21:23:53.197808  580632 kubeadm.go:403] duration metric: took 79.708097ms to StartCluster
	I1120 21:23:53.197825  580632 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.197892  580632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.199030  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.199301  580632 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:53.199413  580632 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:53.199502  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:53.199513  580632 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:53.199538  580632 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	W1120 21:23:53.199549  580632 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:23:53.199556  580632 addons.go:70] Setting dashboard=true in profile "newest-cni-678421"
	I1120 21:23:53.199565  580632 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:53.199573  580632 addons.go:239] Setting addon dashboard=true in "newest-cni-678421"
	I1120 21:23:53.199579  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	W1120 21:23:53.199581  580632 addons.go:248] addon dashboard should already be in state true
	I1120 21:23:53.199581  580632 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:53.199609  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.199914  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200071  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200090  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.201838  580632 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:53.203715  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:53.227682  580632 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	W1120 21:23:53.227708  580632 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:23:53.227739  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.228202  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.228303  580632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:53.228953  580632 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:23:53.229725  580632 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.229745  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:53.229800  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.231712  580632 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 21:23:53.232850  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:23:53.232872  580632 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:23:53.232947  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.265766  580632 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.265850  580632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:53.265935  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.266314  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.268641  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.294295  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.343706  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:53.357884  580632 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:53.357961  580632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:53.370517  580632 api_server.go:72] duration metric: took 171.180002ms to wait for apiserver process to appear ...
	I1120 21:23:53.370547  580632 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:53.370574  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:53.384995  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:23:53.385021  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:23:53.387564  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.400161  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:23:53.400191  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:23:53.410438  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.417003  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:23:53.417034  580632 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:23:53.431937  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:23:53.431967  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:23:53.449462  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:23:53.449491  580632 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:23:53.468486  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:23:53.468515  580632 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:23:53.485129  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:23:53.485160  580632 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:23:53.498138  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:23:53.498163  580632 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:23:53.513999  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:53.514025  580632 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:23:53.528983  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
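
The sshutil/ssh_runner steps above open SSH clients against 127.0.0.1:33133 as user docker with the machine's id_rsa, push each addon manifest onto the node, and run commands there. A minimal sketch of that pattern with golang.org/x/crypto/ssh, under those assumptions from the log; this is an illustration, not minikube's sshutil/ssh_runner code:

```go
// Minimal sketch of the ssh_runner pattern above: dial the forwarded Docker
// port with the machine key, then run a command on the node.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and endpoint are taken from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container, no known_hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	sess.Stdout, sess.Stderr = os.Stdout, os.Stderr
	// Equivalent of one ssh_runner.go Run step from the log:
	if err := sess.Run("sudo systemctl start kubelet"); err != nil {
		log.Fatal(err)
	}
}
```
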
	I1120 21:23:54.509181  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.509231  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.509249  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.515250  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.515284  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.871126  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.876266  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:54.876293  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.038413  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650810705s)
	I1120 21:23:55.038453  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.627983684s)
	I1120 21:23:55.038566  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.509543165s)
	I1120 21:23:55.040585  580632 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-678421 addons enable metrics-server
	
	I1120 21:23:55.050757  580632 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 21:23:55.052024  580632 addons.go:515] duration metric: took 1.852618686s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:55.370859  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.375402  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:55.375429  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.871078  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.875821  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:55.876995  580632 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:55.877022  580632 api_server.go:131] duration metric: took 2.506467275s to wait for apiserver health ...
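
The healthz progression above is typical of a restarting apiserver: 403 while anonymous access to /healthz is not yet permitted (the RBAC bootstrap roles hook has not run), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still pending, then 200 once startup completes. A minimal sketch of such a polling loop; the endpoint is taken from the log, the loop itself is illustrative:

```go
// Sketch of the healthz wait above: poll /healthz until 200, tolerating the
// 403 (anonymous RBAC not bootstrapped) and 500 (poststarthooks pending)
// phases seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by the cluster CA; skip verification
		// the way an unauthenticated probe would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```
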
	I1120 21:23:55.877035  580632 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:55.881011  580632 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:55.881052  580632 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881064  580632 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:55.881076  580632 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:55.881086  580632 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:55.881102  580632 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:55.881111  580632 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:55.881120  580632 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:55.881127  580632 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881138  580632 system_pods.go:74] duration metric: took 4.09635ms to wait for pod list to return data ...
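
The system_pods wait above lists kube-system pods and reports each one's readiness; several are Running but not Ready here because the node still carries the not-ready taint. A minimal client-go sketch of the same check, reusing the kubeconfig path shown earlier in the log:

```go
// Sketch of the system_pods.go wait above: list kube-system pods and report
// phase plus the Ready condition.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-50s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}
```
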
	I1120 21:23:55.881153  580632 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:55.883836  580632 default_sa.go:45] found service account: "default"
	I1120 21:23:55.883863  580632 default_sa.go:55] duration metric: took 2.701397ms for default service account to be created ...
	I1120 21:23:55.883875  580632 kubeadm.go:587] duration metric: took 2.684545859s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:55.883891  580632 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:55.886610  580632 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:55.886636  580632 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:55.886650  580632 node_conditions.go:105] duration metric: took 2.75414ms to run NodePressure ...
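
The NodePressure step above reads each node's pressure conditions plus its capacity (304681132Ki of ephemeral storage and 8 CPUs here). A minimal client-go sketch of the same verification:

```go
// Sketch of the NodePressure check above: read pressure conditions and
// capacity for each node.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
			}
		}
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```
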
	I1120 21:23:55.886662  580632 start.go:242] waiting for startup goroutines ...
	I1120 21:23:55.886668  580632 start.go:247] waiting for cluster config update ...
	I1120 21:23:55.886679  580632 start.go:256] writing updated cluster config ...
	I1120 21:23:55.886967  580632 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:55.937055  580632 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:55.938658  580632 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.820230555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.824587719Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 20 21:23:15 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:15.824619956Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.59969419Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=85a8f4e4-0936-45b5-8a31-b9e3e63051ea name=/runtime.v1.ImageService/PullImage
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.600360469Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b6201472-d02c-48f0-8c6b-02bd1af63bf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.602027468Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7e2e7198-9148-4a66-855d-00566e5d14b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.607131811Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard" id=01391e26-084d-4a49-9928-3980dcb9a3f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.607309968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.61239745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.612591434Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3a1e6d6cfbf6bede239bc25f0beba606bb85c7f5b19d21c87e1563235a4c0094/merged/etc/group: no such file or directory"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.612927305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.649575939Z" level=info msg="Created container 0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard" id=01391e26-084d-4a49-9928-3980dcb9a3f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.650299245Z" level=info msg="Starting container: 0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b" id=3f69523a-5c5a-43ea-8d82-5f54439846e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:16 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:16.652716328Z" level=info msg="Started container" PID=1756 containerID=0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr/kubernetes-dashboard id=3f69523a-5c5a-43ea-8d82-5f54439846e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa49cdc27d26a44b148fe3bd862daec3ea229ffd8417f4a126b694e0bcf00bb3
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.986864781Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9cf9065-6a97-48bf-a840-d0991c54633d name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.989563691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=493be92f-0214-463c-b6ab-9a8f242f81be name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.992687288Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=2e5751b7-00cf-4e3d-b865-5849461ca8ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.992832992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.999102685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:28 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:28.999610514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.035053642Z" level=info msg="Created container 808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=2e5751b7-00cf-4e3d-b865-5849461ca8ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.035754337Z" level=info msg="Starting container: 808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea" id=38369ab9-534e-479f-a386-5be0dc3cd8f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.038042686Z" level=info msg="Started container" PID=1778 containerID=808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper id=38369ab9-534e-479f-a386-5be0dc3cd8f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec3df1c2092d4318628aa05e610515dbe1f1d6000c71bf0e617ed1d4497f7f3
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.111752232Z" level=info msg="Removing container: aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a" id=309f4846-325a-449b-b513-e75159b90282 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 20 21:23:29 default-k8s-diff-port-454524 crio[567]: time="2025-11-20T21:23:29.122576973Z" level=info msg="Removed container aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff/dashboard-metrics-scraper" id=309f4846-325a-449b-b513-e75159b90282 name=/runtime.v1.RuntimeService/RemoveContainer
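
The /runtime.v1.ImageService and /runtime.v1.RuntimeService entries above are CRI gRPC calls served over CRI-O's unix socket. A minimal client issuing one such call, assuming CRI-O's default socket path /var/run/crio/crio.sock; this is an illustration of the protocol, not crictl or kubelet code:

```go
// Sketch: list containers over the CRI gRPC API, the same service the
// "Creating container"/"Started container" log entries above belong to.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Mirrors the "container status" table that follows in this report.
		fmt.Printf("%.13s  %-28s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
```
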
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	808bc7edcf5af       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   dec3df1c2092d       dashboard-metrics-scraper-6ffb444bf9-pngff             kubernetes-dashboard
	0306081c17872       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   aa49cdc27d26a       kubernetes-dashboard-855c9754f9-psntr                  kubernetes-dashboard
	75a7a0ef60fec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   f708b1045bc5f       storage-provisioner                                    kube-system
	14b6e98fbad5f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   2c8fa22a310bb       busybox                                                default
	7961048e0f06b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   4ad7cf83e43c5       coredns-66bc5c9577-zkl9z                               kube-system
	11d9f9b8da1b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   174f4cb78b9ff       kindnet-clzlq                                          kube-system
	69bf579c533fc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   3f4b2b5975538       kube-proxy-fpnmp                                       kube-system
	d3649474106a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   f708b1045bc5f       storage-provisioner                                    kube-system
	3921f1915faef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   461ccc940f435       etcd-default-k8s-diff-port-454524                      kube-system
	5e958020a3930       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   04de9370ea6bf       kube-apiserver-default-k8s-diff-port-454524            kube-system
	32e30730c959d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   554f8b08fa997       kube-controller-manager-default-k8s-diff-port-454524   kube-system
	5d74d09802c74       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   a2310ba020988       kube-scheduler-default-k8s-diff-port-454524            kube-system
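
The ATTEMPT column above corresponds to RestartCount in each pod's container statuses; the Exited dashboard-metrics-scraper at attempt 2 is the same container the kubelet log at the end of this report removes and recreates. A minimal client-go sketch reading those fields, with the namespace and pod name taken from the table:

```go
// Sketch: read restart count and last-termination info for the crash-looping
// scraper pod shown above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kubernetes-dashboard").Get(context.Background(),
		"dashboard-metrics-scraper-6ffb444bf9-pngff", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s restarts=%d", st.Name, st.RestartCount)
		// LastTerminationState holds the previous attempt's exit info.
		if t := st.LastTerminationState.Terminated; t != nil {
			fmt.Printf(" lastExit=%d reason=%s", t.ExitCode, t.Reason)
		}
		fmt.Println()
	}
}
```
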
	
	
	==> coredns [7961048e0f06ba18db0fd4b69d46b8e5d7b30eeced91249265cb951ea3ac0b34] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51390 - 58663 "HINFO IN 1395662525658519895.2157227740901871941. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060849126s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
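
The list errors above are plain TCP i/o timeouts against 10.96.0.1:443, the kubernetes Service VIP, which is unreachable until kube-proxy has programmed its service rules; CoreDNS recovers on its own once the VIP answers. A minimal reachability probe of the same endpoint, for illustration:

```go
// Sketch: probe the kubernetes Service VIP the way CoreDNS's client-go
// reflector effectively does; before kube-proxy syncs, this times out.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout"
		return
	}
	conn.Close()
	fmt.Println("kubernetes service VIP reachable")
}
```
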
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-454524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-454524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=default-k8s-diff-port-454524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-454524
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Nov 2025 21:23:55 +0000   Thu, 20 Nov 2025 21:22:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-454524
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                5a173afd-5240-460c-a507-61495be2fab4
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-zkl9z                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-454524                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-clzlq                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-454524             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-454524    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-fpnmp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-454524             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pngff              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-psntr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-454524 event: Registered Node default-k8s-diff-port-454524 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-454524 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-454524 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-454524 event: Registered Node default-k8s-diff-port-454524 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [3921f1915faef9c9893b50dc9abdbf0b0ffb04a39807d004316cbe5d73fe1e48] <==
	{"level":"warn","ts":"2025-11-20T21:23:03.865528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.872328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.878927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.888751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.895793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.902959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.909188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.915590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.923387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.929397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.936277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.951441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.957870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.965285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.972817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.980883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.989095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:03.997404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.008526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.016292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.022715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.029328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.044823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.051247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:04.109495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41162","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:24:01 up  4:06,  0 user,  load average: 3.42, 4.39, 2.94
	Linux default-k8s-diff-port-454524 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11d9f9b8da1b13f9483c064d37db32796bf02190d94b45044437a213b52b737e] <==
	podIP = 192.168.85.2
	I1120 21:23:05.472981       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:23:05.472999       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:23:05.473020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:23:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:23:05.771809       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:23:05.772008       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:23:05.772050       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:23:05.772313       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1120 21:23:05.772796       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1120 21:23:05.772802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1120 21:23:05.868206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1120 21:23:07.272336       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:23:07.272375       1 metrics.go:72] Registering metrics
	I1120 21:23:07.272466       1 controller.go:711] "Syncing nftables rules"
	I1120 21:23:15.771717       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:15.771784       1 main.go:301] handling current node
	I1120 21:23:25.778300       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:25.778333       1 main.go:301] handling current node
	I1120 21:23:35.771979       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:35.772009       1 main.go:301] handling current node
	I1120 21:23:45.773242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:45.773286       1 main.go:301] handling current node
	I1120 21:23:55.772395       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1120 21:23:55.772440       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e958020a3930a138f25a5b87c0ccb52a3f362bfa85766cf949afb376899d198] <==
	I1120 21:23:04.626756       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:23:04.626765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:23:04.626774       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:23:04.627425       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1120 21:23:04.628648       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1120 21:23:04.628650       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1120 21:23:04.628939       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:23:04.643356       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1120 21:23:04.650775       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:23:04.675654       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1120 21:23:04.684209       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1120 21:23:04.684329       1 policy_source.go:240] refreshing policies
	I1120 21:23:04.707511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:23:04.929128       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:23:04.964123       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:23:04.985484       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:23:05.000476       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:23:05.012592       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:23:05.070592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.76.198"}
	I1120 21:23:05.082355       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.177.9"}
	I1120 21:23:05.526489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:23:08.129548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:23:08.477477       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:23:08.626557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:23:08.626557       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
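
The "allocated clusterIPs" lines above fire when a Service is created: the apiserver's allocator picks a free address from the service CIDR and writes it into the object. A minimal client-go sketch showing the allocation being observable on the returned object; the Service name and selector here are illustrative:

```go
// Sketch: create a Service and read the ClusterIP the apiserver allocated,
// as in the alloc.go log lines above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Filled in by the apiserver's allocator, like 10.99.76.198 in the log.
	fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
}
```
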
	
	
	==> kube-controller-manager [32e30730c959d73d9d8630bb246958dd9ab048e29f3ecc9af6cec8ea4ffc721e] <==
	I1120 21:23:08.018913       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:23:08.021151       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:23:08.023955       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:23:08.023972       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:23:08.024003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:23:08.024024       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1120 21:23:08.024151       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1120 21:23:08.024162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1120 21:23:08.024141       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:23:08.024168       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1120 21:23:08.024183       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1120 21:23:08.024342       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-454524"
	I1120 21:23:08.024423       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1120 21:23:08.024207       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1120 21:23:08.024206       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:23:08.024819       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:23:08.025346       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:23:08.026548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:23:08.027778       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:23:08.027793       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1120 21:23:08.027799       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1120 21:23:08.030282       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:23:08.032784       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:23:08.049389       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1120 21:23:08.050372       1 shared_informer.go:356] "Caches are synced" controller="job"
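
The "Caches are synced" lines above mark each controller's shared informers completing their initial LIST+WATCH against the apiserver; controllers block on that sync before starting their workers. The same pattern with client-go informers, as a minimal sketch using the kubeconfig path from the log:

```go
// Sketch: wait for an informer cache to sync, the moment a controller would
// log "Caches are synced".
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 0)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until the informer's initial list is cached.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		log.Fatal("cache sync failed")
	}
	fmt.Println("caches are synced; controller can start its workers")
}
```
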
	
	
	==> kube-proxy [69bf579c533fce3c38b538426bf2830c1ba9b8584c53e4f65be87a667ef0448c] <==
	I1120 21:23:05.409135       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:23:05.472408       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:23:05.573183       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:23:05.573231       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1120 21:23:05.573333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:23:05.596472       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:23:05.596555       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:23:05.603731       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:23:05.604238       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:23:05.604282       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:05.606996       1 config.go:200] "Starting service config controller"
	I1120 21:23:05.607020       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:23:05.607043       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:23:05.607049       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:23:05.607065       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:23:05.607071       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:23:05.607488       1 config.go:309] "Starting node config controller"
	I1120 21:23:05.607519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:23:05.707171       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:23:05.707201       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:23:05.707179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1120 21:23:05.707854       1 shared_informer.go:356] "Caches are synced" controller="node config"
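
The nodePortAddresses warning above names its own remedy. A minimal sketch of applying it on a kubeadm-managed cluster, assuming the standard layout where the KubeProxyConfiguration lives in the kube-proxy ConfigMap (not verified against this run):

  # set nodePortAddresses: ["primary"] under data.config.conf
  kubectl -n kube-system edit configmap kube-proxy
  # roll the daemonset so kube-proxy picks up the new config
  kubectl -n kube-system rollout restart daemonset kube-proxy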
	
	
	==> kube-scheduler [5d74d09802c7420ec57c45ee42a2fca9c71a78fa136e9fce50b2eaf269d99c74] <==
	I1120 21:23:03.153741       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:23:04.598693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:23:04.598734       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:23:04.598748       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:23:04.598765       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:23:04.640334       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:23:04.640369       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:04.643331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:04.643442       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:04.644273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:23:04.644345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:23:04.743726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
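
The requestheader warnings above are a startup race (RBAC not yet synced) and the log spells out the usual fix. A hedged illustration with hypothetical placeholders substituted for ROLEBINDING_NAME and YOUR_NS:YOUR_SA:

  # binding name, namespace and service account below are illustrative only
  kubectl create rolebinding extension-apiserver-auth-reader \
    -n kube-system \
    --role=extension-apiserver-authentication-reader \
    --serviceaccount=my-namespace:my-serviceaccount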
	
	
	==> kubelet <==
	Nov 20 21:23:08 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:08.565870     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtr4l\" (UniqueName: \"kubernetes.io/projected/8811704a-85d6-4e90-adb4-08f581d9ade6-kube-api-access-mtr4l\") pod \"dashboard-metrics-scraper-6ffb444bf9-pngff\" (UID: \"8811704a-85d6-4e90-adb4-08f581d9ade6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff"
	Nov 20 21:23:08 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:08.565947     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b6ddf9d-f15b-465a-89af-d622cce06e01-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-psntr\" (UID: \"7b6ddf9d-f15b-465a-89af-d622cce06e01\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr"
	Nov 20 21:23:13 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:13.055126     730 scope.go:117] "RemoveContainer" containerID="56587bcbccaca9f35b3d77e91bc55473963ac20900f4d92c72e1f4c5ae224758"
	Nov 20 21:23:13 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:13.105754     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:14.060361     730 scope.go:117] "RemoveContainer" containerID="56587bcbccaca9f35b3d77e91bc55473963ac20900f4d92c72e1f4c5ae224758"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:14.060430     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:14 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:14.060588     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:15 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:15.065015     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:15 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:15.065695     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:16 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:16.068093     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:16 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:16.068341     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:17 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:17.084571     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-psntr" podStartSLOduration=1.272838529 podStartE2EDuration="9.084547587s" podCreationTimestamp="2025-11-20 21:23:08 +0000 UTC" firstStartedPulling="2025-11-20 21:23:08.789677949 +0000 UTC m=+6.914335266" lastFinishedPulling="2025-11-20 21:23:16.601386995 +0000 UTC m=+14.726044324" observedRunningTime="2025-11-20 21:23:17.084511867 +0000 UTC m=+15.209169205" watchObservedRunningTime="2025-11-20 21:23:17.084547587 +0000 UTC m=+15.209204924"
	Nov 20 21:23:28 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:28.986341     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:29.110293     730 scope.go:117] "RemoveContainer" containerID="aafe60eecb786b4f2d0da3b91c3a82228a4b63995e573d8917e66ed8790e5b1a"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:29.110536     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:29 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:29.110797     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:34 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:34.204688     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:34 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:34.204983     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:48 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:48.985680     730 scope.go:117] "RemoveContainer" containerID="808bc7edcf5af3e6fbff8cf4cdab3bcf55b8f4d030add47f1a709369858144ea"
	Nov 20 21:23:48 default-k8s-diff-port-454524 kubelet[730]: E1120 21:23:48.985870     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pngff_kubernetes-dashboard(8811704a-85d6-4e90-adb4-08f581d9ade6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pngff" podUID="8811704a-85d6-4e90-adb4-08f581d9ade6"
	Nov 20 21:23:56 default-k8s-diff-port-454524 kubelet[730]: I1120 21:23:56.935229     730 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 20 21:23:56 default-k8s-diff-port-454524 systemd[1]: kubelet.service: Consumed 1.897s CPU time.
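
The kubelet lines above show the CrashLoopBackOff back-off doubling (10s, then 20s) for dashboard-metrics-scraper. A quick way to see why the container keeps dying, using the pod and context names taken from the logs:

  # logs of the previous (crashed) container instance
  kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard \
    logs dashboard-metrics-scraper-6ffb444bf9-pngff --previous
  # restart count, last state and events
  kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard \
    describe pod dashboard-metrics-scraper-6ffb444bf9-pngff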
	
	
	==> kubernetes-dashboard [0306081c1787203809707d835c4e62558db3110a7d9ea6c87d9f3d295a60e14b] <==
	2025/11/20 21:23:16 Starting overwatch
	2025/11/20 21:23:16 Using namespace: kubernetes-dashboard
	2025/11/20 21:23:16 Using in-cluster config to connect to apiserver
	2025/11/20 21:23:16 Using secret token for csrf signing
	2025/11/20 21:23:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/20 21:23:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/20 21:23:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/20 21:23:16 Generating JWE encryption key
	2025/11/20 21:23:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/20 21:23:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/20 21:23:16 Initializing JWE encryption key from synchronized object
	2025/11/20 21:23:16 Creating in-cluster Sidecar client
	2025/11/20 21:23:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/20 21:23:16 Serving insecurely on HTTP port: 9090
	2025/11/20 21:23:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
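
The dashboard retries its metric client health check every 30 seconds because the dashboard-metrics-scraper Service has no ready backend; its pod is the one crash-looping in the kubelet log above. A sketch of confirming that, reusing the context name from this run:

  kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard get svc dashboard-metrics-scraper
  # an endpointslice with no ready endpoints would confirm the missing backend
  kubectl --context default-k8s-diff-port-454524 -n kubernetes-dashboard get endpointslices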
	
	
	==> storage-provisioner [75a7a0ef60fecf1571bf9f7857404111211f091c153d30f820fe0aea9f50fb6c] <==
	W1120 21:23:37.596566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.599375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:39.604758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.607635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:41.611976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.614743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:43.619684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:45.623381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:45.627375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:47.630587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:47.634305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:49.637183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:49.643172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:51.646321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:51.650682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:53.654364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:53.658476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:55.661707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:55.666355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:57.670504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:57.674187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:59.677612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:23:59.681987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:24:01.685422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1120 21:24:01.691887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
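
The warnings above come from the provisioner still watching v1 Endpoints, which is deprecated in v1.33+. The discovery.k8s.io/v1 replacement can be queried side by side (context name from this run):

  kubectl --context default-k8s-diff-port-454524 get endpoints -A                      # deprecated API
  kubectl --context default-k8s-diff-port-454524 get endpointslices.discovery.k8s.io -A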
	
	
	==> storage-provisioner [d3649474106a31c1b6ed18da94fdcf513679c355ee4944ce2226a39eb9456679] <==
	I1120 21:23:05.372787       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1120 21:23:05.374361       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
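
10.96.0.1 is the first address of the default service CIDR (10.96.0.0/12), i.e. the in-cluster kubernetes Service VIP, so the refused connection just means the apiserver was not yet serving at 21:23:05; the other provisioner instance above, started later, runs fine. A sketch of checking the VIP from the node, assuming curl is present in the node image (/version is anonymously readable on default installs):

  minikube -p default-k8s-diff-port-454524 ssh -- curl -sk https://10.96.0.1/version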
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524: exit status 2 (372.906446ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-678421 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-678421 --alsologtostderr -v=1: exit status 80 (2.343359309s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-678421 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:23:56.662745  583015 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:56.663070  583015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:56.663081  583015 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:56.663085  583015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:56.663310  583015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:56.663585  583015 out.go:368] Setting JSON to false
	I1120 21:23:56.663638  583015 mustload.go:66] Loading cluster: newest-cni-678421
	I1120 21:23:56.664014  583015 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:56.664451  583015 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:56.685661  583015 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:56.685944  583015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:56.745310  583015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:56.735154867 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:56.745999  583015 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-678421 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1120 21:23:56.747754  583015 out.go:179] * Pausing node newest-cni-678421 ... 
	I1120 21:23:56.748748  583015 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:56.749124  583015 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:56.749170  583015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:56.769948  583015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:56.872648  583015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:56.885162  583015 pause.go:52] kubelet running: true
	I1120 21:23:56.885267  583015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:57.040797  583015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:57.040881  583015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:57.110794  583015 cri.go:89] found id: "65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35"
	I1120 21:23:57.110818  583015 cri.go:89] found id: "3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d"
	I1120 21:23:57.110823  583015 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:57.110826  583015 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:57.110828  583015 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:57.110831  583015 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:57.110833  583015 cri.go:89] found id: ""
	I1120 21:23:57.110871  583015 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:57.123288  583015 retry.go:31] will retry after 196.618847ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:57Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:57.320728  583015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:57.334649  583015 pause.go:52] kubelet running: false
	I1120 21:23:57.334717  583015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:57.466334  583015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:57.466447  583015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:57.537426  583015 cri.go:89] found id: "65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35"
	I1120 21:23:57.537449  583015 cri.go:89] found id: "3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d"
	I1120 21:23:57.537453  583015 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:57.537456  583015 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:57.537458  583015 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:57.537461  583015 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:57.537464  583015 cri.go:89] found id: ""
	I1120 21:23:57.537508  583015 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:57.550238  583015 retry.go:31] will retry after 510.782049ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:57Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:58.062042  583015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:58.075840  583015 pause.go:52] kubelet running: false
	I1120 21:23:58.075906  583015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:58.204307  583015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:58.204423  583015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:58.278711  583015 cri.go:89] found id: "65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35"
	I1120 21:23:58.278743  583015 cri.go:89] found id: "3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d"
	I1120 21:23:58.278750  583015 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:58.278756  583015 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:58.278761  583015 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:58.278767  583015 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:58.278772  583015 cri.go:89] found id: ""
	I1120 21:23:58.278844  583015 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:58.293010  583015 retry.go:31] will retry after 398.681878ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:58Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:58.692427  583015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:23:58.707940  583015 pause.go:52] kubelet running: false
	I1120 21:23:58.708000  583015 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1120 21:23:58.825994  583015 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1120 21:23:58.826088  583015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1120 21:23:58.906376  583015 cri.go:89] found id: "65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35"
	I1120 21:23:58.906403  583015 cri.go:89] found id: "3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d"
	I1120 21:23:58.906415  583015 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:58.906420  583015 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:58.906423  583015 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:58.906428  583015 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:58.906432  583015 cri.go:89] found id: ""
	I1120 21:23:58.906481  583015 ssh_runner.go:195] Run: sudo runc list -f json
	I1120 21:23:58.921321  583015 out.go:203] 
	W1120 21:23:58.923581  583015 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1120 21:23:58.923624  583015 out.go:285] * 
	* 
	W1120 21:23:58.928902  583015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1120 21:23:58.930583  583015 out.go:203] 

                                                
                                                
** /stderr **
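
Every retry above dies on the same error: `runc list` cannot open /run/runc, so minikube never gets a container list to pause. A minimal sketch for inspecting the runtime state on the node by hand, using the profile name and the exact command from the logs (whether crio keeps runc state under /run/runc or a different state root is an assumption to verify):

  minikube -p newest-cni-678421 ssh -- sudo ls -ld /run/runc
  minikube -p newest-cni-678421 ssh -- sudo crictl ps
  minikube -p newest-cni-678421 ssh -- sudo runc --root /run/runc list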
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-678421 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-678421
helpers_test.go:243: (dbg) docker inspect newest-cni-678421:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	        "Created": "2025-11-20T21:23:11.873210251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 580830,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:23:46.302894933Z",
	            "FinishedAt": "2025-11-20T21:23:45.401763914Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hosts",
	        "LogPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14-json.log",
	        "Name": "/newest-cni-678421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-678421:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-678421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	                "LowerDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-678421",
	                "Source": "/var/lib/docker/volumes/newest-cni-678421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-678421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-678421",
	                "name.minikube.sigs.k8s.io": "newest-cni-678421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "83ad46916fecd3d983ad011b63b46fd65f64db57b75c26977a33d64475371653",
	            "SandboxKey": "/var/run/docker/netns/83ad46916fec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-678421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "093eb702f51d45a3830e07e67ca1106b8fab033ac409a63fdd5ab62c257a2c9e",
	                    "EndpointID": "755e66db3bdd75c2851195eefb5e6397b531b35aa52f6309e042fa93911eaf88",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:f1:5f:fe:8e:cf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-678421",
	                        "e821ad74a972"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
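The inspect output maps container port 22 to host port 33133, matching the SSH dial in the pause logs above; a quick cross-check:

  docker port newest-cni-678421 22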
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421: exit status 2 (357.279316ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-678421 logs -n 25
E1120 21:23:59.806151  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/kindnet-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-678421 logs -n 25: (1.058901585s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-678421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-454524 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ newest-cni-678421 image list --format=json                                                                                                                                                                                                    │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-454524 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ pause   │ -p newest-cni-678421 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
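
The prefix on each line below follows Go's klog convention: a severity letter (I, W, E or F), month and day, wall-clock time with microseconds, a thread id, then the source file and line that emitted the message. A convenient way to pull only the warnings and errors out of a dump like this (a grep one-liner for the reader, not part of the harness; the file name is illustrative):

	grep -E '^[[:space:]]*[WE][0-9]{4} ' lastStart.txt
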
	I1120 21:23:46.048493  580632 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:46.048745  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048754  580632 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:46.048757  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048979  580632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:46.049453  580632 out.go:368] Setting JSON to false
	I1120 21:23:46.050631  580632 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14768,"bootTime":1763659058,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:46.050732  580632 start.go:143] virtualization: kvm guest
	I1120 21:23:46.052729  580632 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:46.054001  580632 notify.go:221] Checking for updates...
	I1120 21:23:46.054037  580632 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:46.055262  580632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:46.056552  580632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:46.057861  580632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:46.058912  580632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:46.060118  580632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:46.061652  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:46.062194  580632 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:46.086682  580632 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:46.086778  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.150294  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.138396042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.150451  580632 docker.go:319] overlay module found
	I1120 21:23:46.152137  580632 out.go:179] * Using the docker driver based on existing profile
	I1120 21:23:46.153355  580632 start.go:309] selected driver: docker
	I1120 21:23:46.153372  580632 start.go:930] validating driver "docker" against &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.153484  580632 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:46.154208  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.217838  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.20805981 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.218634  580632 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:46.218693  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:46.218746  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:46.218816  580632 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.221718  580632 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:46.223273  580632 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:46.224323  580632 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:46.225587  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:46.225618  580632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:46.225634  580632 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:46.225700  580632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:46.225713  580632 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:46.225745  580632 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:46.225840  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.250485  580632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:46.250505  580632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:46.250520  580632 cache.go:243] Successfully downloaded all kic artifacts
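
The two checks above short-circuit the kicbase pull because the image is already present in the local daemon. The equivalent manual probe looks roughly like this (a sketch; the @sha256 digest suffix is elided for brevity):

	img='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924'
	# "inspect" succeeds only if the image exists locally; otherwise pull it.
	docker image inspect "$img" >/dev/null 2>&1 || docker pull "$img"
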
	I1120 21:23:46.250545  580632 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:46.250600  580632 start.go:364] duration metric: took 36.944µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:46.250616  580632 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:23:46.250624  580632 fix.go:54] fixHost starting: 
	I1120 21:23:46.250818  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.271749  580632 fix.go:112] recreateIfNeeded on newest-cni-678421: state=Stopped err=<nil>
	W1120 21:23:46.271804  580632 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:23:46.273510  580632 out.go:252] * Restarting existing docker container for "newest-cni-678421" ...
	I1120 21:23:46.273588  580632 cli_runner.go:164] Run: docker start newest-cni-678421
	I1120 21:23:46.640583  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.661704  580632 kic.go:430] container "newest-cni-678421" state is running.
	I1120 21:23:46.662149  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:46.686053  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.686348  580632 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:46.686428  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:46.706344  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:46.706819  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:46.706846  580632 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:46.707727  580632 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56362->127.0.0.1:33133: read: connection reset by peer
	I1120 21:23:49.843845  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
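The "connection reset by peer" above is expected on a restart: docker start returns before sshd inside the container accepts connections, so provisioning simply retries until the hostname probe succeeds (about three seconds later here). A standalone equivalent of that readiness loop, using the forwarded port and key path from this run (a hypothetical helper mirroring what libmachine does internally):

	# Retry until the container's sshd answers on the forwarded port.
	until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    -i ~/.minikube/machines/newest-cni-678421/id_rsa \
	    -p 33133 docker@127.0.0.1 hostname >/dev/null 2>&1; do
	  sleep 1
	done
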
	I1120 21:23:49.843888  580632 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:49.843955  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:49.863206  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:49.863522  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:49.863542  580632 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:50.004857  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:50.004940  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.023923  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.024145  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.024162  580632 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:50.156143  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:50.156182  580632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:50.156257  580632 ubuntu.go:190] setting up certificates
	I1120 21:23:50.156270  580632 provision.go:84] configureAuth start
	I1120 21:23:50.156339  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.176257  580632 provision.go:143] copyHostCerts
	I1120 21:23:50.176333  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:50.176355  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:50.176432  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:50.176553  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:50.176566  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:50.176606  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:50.176690  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:50.176700  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:50.176737  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:50.176809  580632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:50.229409  580632 provision.go:177] copyRemoteCerts
	I1120 21:23:50.229481  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:50.229536  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.248655  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.344789  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:50.363151  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:50.381153  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:50.399055  580632 provision.go:87] duration metric: took 242.768844ms to configureAuth
	I1120 21:23:50.399082  580632 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:50.399272  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:50.399375  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.418619  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.418835  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.418850  580632 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:50.711816  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:50.711848  580632 machine.go:97] duration metric: took 4.025481618s to provisionDockerMachine
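
Runtime provisioning reduces to one drop-in environment file plus a daemon restart. Cleaned of escaping, the command that just ran over SSH is:

	# Mark the service CIDR as an insecure registry for CRI-O, then restart it.
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" |
	  sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
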
	I1120 21:23:50.711864  580632 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:50.711878  580632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:50.711941  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:50.711982  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.732036  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.829787  580632 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:50.833560  580632 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:50.833616  580632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:50.833627  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:50.833705  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:50.833835  580632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:50.833980  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:50.842564  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:50.861253  580632 start.go:296] duration metric: took 149.369694ms for postStartSetup
	I1120 21:23:50.861339  580632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:50.861377  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.880051  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.974208  580632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1120 21:23:50.979126  580632 fix.go:56] duration metric: took 4.728491713s for fixHost
	I1120 21:23:50.979158  580632 start.go:83] releasing machines lock for "newest-cni-678421", held for 4.728546595s
	I1120 21:23:50.979256  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.998093  580632 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:50.998117  580632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:50.998142  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.998179  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:51.019563  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.019937  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.111789  580632 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:51.174631  580632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:51.210913  580632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:51.216140  580632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:51.216212  580632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:51.225258  580632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
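
Because kindnet will be installed as the CNI, any pre-existing bridge or podman configs are renamed aside rather than deleted (the .mk_disabled suffix keeps the change reversible). With the shell quoting that the log strips restored, the find invocation reads as follows; note that GNU find substitutes {} even inside the quoted sh -c string:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
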
	I1120 21:23:51.225285  580632 start.go:496] detecting cgroup driver to use...
	I1120 21:23:51.225322  580632 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:51.225373  580632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:51.239684  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:51.252817  580632 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:51.252873  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:51.267677  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:51.280313  580632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:51.359820  580632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:51.440243  580632 docker.go:234] disabling docker service ...
	I1120 21:23:51.440315  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:51.455600  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:51.468814  580632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:51.549991  580632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:51.639411  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:51.653330  580632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:51.668426  580632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:51.668496  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.678387  580632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:51.678448  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.687514  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.696617  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.705907  580632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:51.714416  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.724299  580632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.733643  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.743143  580632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:51.751288  580632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1120 21:23:51.758956  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:51.839688  580632 ssh_runner.go:195] Run: sudo systemctl restart crio
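
Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf toward roughly the following state (a reconstruction from the commands, not a capture of the actual file; the section headers are assumed from the stock CRI-O layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
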
	I1120 21:23:51.991719  580632 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:51.991791  580632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:51.995963  580632 start.go:564] Will wait 60s for crictl version
	I1120 21:23:51.996011  580632 ssh_runner.go:195] Run: which crictl
	I1120 21:23:51.999596  580632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:52.025769  580632 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:52.025844  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.055148  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.087414  580632 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:52.088512  580632 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:52.106859  580632 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:52.111317  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
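
The /etc/hosts rewrite above is deliberately idempotent: any previous line for the name is filtered out, the fresh mapping appended, and the result copied back over the original, so repeated starts never accumulate duplicates. Generalized (set_host_entry is an illustrative helper, not minikube code):

	set_host_entry() {
	  local ip=$1 name=$2
	  # Drop any existing line for $name, then append the new tab-separated mapping.
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	set_host_entry 192.168.103.1 host.minikube.internal
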
	I1120 21:23:52.124544  580632 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 21:23:52.125757  580632 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:52.125892  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:52.125953  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.159731  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.159752  580632 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:52.159798  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.187161  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.187185  580632 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:52.187193  580632 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:52.187306  580632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1120 21:23:52.187376  580632 ssh_runner.go:195] Run: crio config
	I1120 21:23:52.235170  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:52.235200  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:52.235246  580632 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:52.235280  580632 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:52.235426  580632 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:23:52.235503  580632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:52.243927  580632 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:52.244009  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:52.252390  580632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:52.265368  580632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:52.278329  580632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
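
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as kubeadm.yaml.new. On a fresh node the same file would be handed straight to kubeadm (illustrative; this run instead takes the restart path and only diffs the staged file against the previous one, as the log shows further down):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
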
	I1120 21:23:52.292057  580632 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:52.296001  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.306383  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:52.386821  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:52.412054  580632 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:52.412083  580632 certs.go:195] generating shared ca certs ...
	I1120 21:23:52.412101  580632 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:52.412365  580632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:52.412416  580632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:52.412425  580632 certs.go:257] generating profile certs ...
	I1120 21:23:52.412506  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:52.412557  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:52.412600  580632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:52.412708  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:52.412737  580632 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:52.412744  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:52.412764  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:52.412789  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:52.412810  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:52.412858  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:52.413501  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:52.433062  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:52.455200  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:52.474785  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:52.498862  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:52.517609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:52.536537  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:52.554621  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:52.573916  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:52.592807  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:52.612609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:52.631336  580632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:52.644868  580632 ssh_runner.go:195] Run: openssl version
	I1120 21:23:52.651511  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.659232  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:52.666879  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670902  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670977  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.706119  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:52.714315  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.722433  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:52.730632  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734534  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734604  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.769094  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:52.777452  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.785374  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:52.793202  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797062  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797113  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.832297  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
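
The repeated test -s / ln -fs / openssl x509 -hash / test -L sequence above is the standard OpenSSL trust-store layout: each CA lives under /usr/share/ca-certificates and is reachable through a symlink named after its subject hash (3ec20f2e.0 for the last cert). Written out as a sketch:

	cert=/usr/share/ca-certificates/2540942.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. 3ec20f2e
	sudo ln -fs "$cert" "/etc/ssl/certs/$hash.0"
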
	I1120 21:23:52.840467  580632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:52.844760  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:23:52.879344  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:23:52.914405  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:23:52.957573  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:23:53.002001  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:23:53.059315  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
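
Each control-plane certificate must remain valid for at least another day: openssl x509 -checkend 86400 exits 0 only if the certificate does not expire within 86400 seconds. The six checks above collapse to a loop like this (a sketch):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 ||
	    echo "$c expires within 24h; regeneration needed"
	done
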
	I1120 21:23:53.118112  580632 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:53.118249  580632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:53.118315  580632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:53.155937  580632 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:53.155964  580632 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:53.155969  580632 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:53.155973  580632 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:53.155977  580632 cri.go:89] found id: ""
	I1120 21:23:53.156028  580632 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:23:53.169575  580632 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:53.169666  580632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:53.178572  580632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:23:53.178598  580632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:23:53.178648  580632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:23:53.186477  580632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:23:53.187131  580632 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-678421" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.187455  580632 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-678421" cluster setting kubeconfig missing "newest-cni-678421" context setting]
	I1120 21:23:53.188141  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.189852  580632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:23:53.197764  580632 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1120 21:23:53.197798  580632 kubeadm.go:602] duration metric: took 19.193608ms to restartPrimaryControlPlane
	I1120 21:23:53.197808  580632 kubeadm.go:403] duration metric: took 79.708097ms to StartCluster
	I1120 21:23:53.197825  580632 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.197892  580632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.199030  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.199301  580632 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:53.199413  580632 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:53.199502  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:53.199513  580632 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:53.199538  580632 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	W1120 21:23:53.199549  580632 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:23:53.199556  580632 addons.go:70] Setting dashboard=true in profile "newest-cni-678421"
	I1120 21:23:53.199565  580632 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:53.199573  580632 addons.go:239] Setting addon dashboard=true in "newest-cni-678421"
	I1120 21:23:53.199579  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	W1120 21:23:53.199581  580632 addons.go:248] addon dashboard should already be in state true
	I1120 21:23:53.199581  580632 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:53.199609  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.199914  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200071  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200090  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.201838  580632 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:53.203715  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:53.227682  580632 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	W1120 21:23:53.227708  580632 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:23:53.227739  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.228202  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.228303  580632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:53.228953  580632 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:23:53.229725  580632 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.229745  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:53.229800  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.231712  580632 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 21:23:53.232850  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:23:53.232872  580632 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:23:53.232947  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.265766  580632 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.265850  580632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:53.265935  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.266314  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.268641  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.294295  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.343706  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:53.357884  580632 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:53.357961  580632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:53.370517  580632 api_server.go:72] duration metric: took 171.180002ms to wait for apiserver process to appear ...
	I1120 21:23:53.370547  580632 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:53.370574  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:53.384995  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:23:53.385021  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:23:53.387564  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.400161  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:23:53.400191  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:23:53.410438  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.417003  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:23:53.417034  580632 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:23:53.431937  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:23:53.431967  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:23:53.449462  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:23:53.449491  580632 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:23:53.468486  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:23:53.468515  580632 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:23:53.485129  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:23:53.485160  580632 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:23:53.498138  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:23:53.498163  580632 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:23:53.513999  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:53.514025  580632 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:23:53.528983  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
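	
	All three applies above run on the node itself, each manifest having first been copied into /etc/kubernetes/addons; because they use the node-local kubeconfig and the bundled kubectl binary, they succeed even while the host-side healthz probes below are still being rejected. The equivalent manual invocation for one manifest, as a sketch assuming SSH access:
	
		minikube -p newest-cni-678421 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	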
	I1120 21:23:54.509181  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.509231  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.509249  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.515250  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.515284  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.871126  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.876266  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:54.876293  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
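	
	Both response classes above are normal for a restarting apiserver: the 403 is returned while the RBAC bootstrap roles (which grant anonymous access to /healthz via system:public-info-viewer) have not yet been created, and the later 500 pinpoints exactly the two post-start hooks still pending, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes. The same verbose body is available through authenticated kubectl once the kubeconfig works (a sketch; ?verbose and per-check subpaths are standard apiserver health endpoints):
	
		kubectl --context newest-cni-678421 get --raw='/healthz?verbose'
		kubectl --context newest-cni-678421 get --raw='/healthz/poststarthook/rbac/bootstrap-roles'
	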
	I1120 21:23:55.038413  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650810705s)
	I1120 21:23:55.038453  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.627983684s)
	I1120 21:23:55.038566  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.509543165s)
	I1120 21:23:55.040585  580632 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-678421 addons enable metrics-server
	
	I1120 21:23:55.050757  580632 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 21:23:55.052024  580632 addons.go:515] duration metric: took 1.852618686s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:55.370859  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.375402  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:55.375429  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.871078  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.875821  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:55.876995  580632 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:55.877022  580632 api_server.go:131] duration metric: took 2.506467275s to wait for apiserver health ...
	I1120 21:23:55.877035  580632 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:55.881011  580632 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:55.881052  580632 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881064  580632 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:55.881076  580632 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:55.881086  580632 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:55.881102  580632 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:55.881111  580632 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:55.881120  580632 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:55.881127  580632 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881138  580632 system_pods.go:74] duration metric: took 4.09635ms to wait for pod list to return data ...
	I1120 21:23:55.881153  580632 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:55.883836  580632 default_sa.go:45] found service account: "default"
	I1120 21:23:55.883863  580632 default_sa.go:55] duration metric: took 2.701397ms for default service account to be created ...
	I1120 21:23:55.883875  580632 kubeadm.go:587] duration metric: took 2.684545859s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:55.883891  580632 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:55.886610  580632 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:55.886636  580632 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:55.886650  580632 node_conditions.go:105] duration metric: took 2.75414ms to run NodePressure ...
	I1120 21:23:55.886662  580632 start.go:242] waiting for startup goroutines ...
	I1120 21:23:55.886668  580632 start.go:247] waiting for cluster config update ...
	I1120 21:23:55.886679  580632 start.go:256] writing updated cluster config ...
	I1120 21:23:55.886967  580632 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:55.937055  580632 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:55.938658  580632 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.787376973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.790850713Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=56fc0576-84fd-490d-8978-3f98a9850d89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.792313053Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=998a4c29-3959-444f-a419-179d5b797259 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.792886133Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.79374391Z" level=info msg="Ran pod sandbox bd649de1461b51dd7032d2bbc181af80adfdecaf5205095bb7a861c187ea7c56 with infra container: kube-system/kindnet-454t9/POD" id=56fc0576-84fd-490d-8978-3f98a9850d89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.793788569Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.794693141Z" level=info msg="Ran pod sandbox c9bd7e872776429ae0d70e0cfe4f324fe0fff756b7598098068f1037b36da853 with infra container: kube-system/kube-proxy-t5jmf/POD" id=998a4c29-3959-444f-a419-179d5b797259 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.795021092Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5fb7d238-0b23-4626-a318-fde5774053cb name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.795670009Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2ef3fc6f-ec71-46e3-8c65-0d5cdbba0538 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.796132948Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8d3432fd-2d3f-4dcb-a5d7-34370db40bfe name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.796580543Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=920015e9-c661-42ca-9aa5-63ea2f976d67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797180235Z" level=info msg="Creating container: kube-system/kindnet-454t9/kindnet-cni" id=6dd0408c-89bc-4c5c-a106-5954e065c3e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797295059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797524861Z" level=info msg="Creating container: kube-system/kube-proxy-t5jmf/kube-proxy" id=49a98c38-4add-40c3-8fd7-bb152a2ca3ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797799321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.802485506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.803080492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.805245685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.805955407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.832508048Z" level=info msg="Created container 3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d: kube-system/kindnet-454t9/kindnet-cni" id=6dd0408c-89bc-4c5c-a106-5954e065c3e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.833187647Z" level=info msg="Starting container: 3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d" id=83c2a7cc-7ea0-426f-8f5f-e4ce34150fc3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.834884204Z" level=info msg="Started container" PID=1050 containerID=3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d description=kube-system/kindnet-454t9/kindnet-cni id=83c2a7cc-7ea0-426f-8f5f-e4ce34150fc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd649de1461b51dd7032d2bbc181af80adfdecaf5205095bb7a861c187ea7c56
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.835492494Z" level=info msg="Created container 65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35: kube-system/kube-proxy-t5jmf/kube-proxy" id=49a98c38-4add-40c3-8fd7-bb152a2ca3ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.83607006Z" level=info msg="Starting container: 65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35" id=5a765aef-0bd3-4125-88a5-a0ccda03640c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.83858167Z" level=info msg="Started container" PID=1051 containerID=65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35 description=kube-system/kube-proxy-t5jmf/kube-proxy id=5a765aef-0bd3-4125-88a5-a0ccda03640c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9bd7e872776429ae0d70e0cfe4f324fe0fff756b7598098068f1037b36da853
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	65abef9f5a4ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   c9bd7e8727764       kube-proxy-t5jmf                            kube-system
	3a3122c4d5a98       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   bd649de1461b5       kindnet-454t9                               kube-system
	e94369746f3f1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   63fb7a975a70b       kube-scheduler-newest-cni-678421            kube-system
	844f21c4918c5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   73dd51434acb6       kube-controller-manager-newest-cni-678421   kube-system
	8624913048b6d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   7154c31cbe523       etcd-newest-cni-678421                      kube-system
	37acb2d75f157       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   7fd6ecc2dd973       kube-apiserver-newest-cni-678421            kube-system
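	
	All six control-plane containers above are on ATTEMPT 1, i.e. their second run after the restart, and all report Running; the IMAGE column shows resolved image IDs rather than tags. The table can be reproduced on the node with the same tool minikube drives (a sketch, assuming SSH access):
	
		minikube -p newest-cni-678421 ssh -- sudo crictl ps -a
	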
	
	
	==> describe nodes <==
	Name:               newest-cni-678421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-678421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-678421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:23:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-678421
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-678421
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                aea61b33-8516-4da2-aaf9-1fdf3bc040c2
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-678421                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-454t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-678421             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-678421    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-t5jmf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-678421             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node newest-cni-678421 event: Registered Node newest-cni-678421 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-678421 event: Registered Node newest-cni-678421 in Controller
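	
	The describe output ties the pending pods together: the node is Ready=False because no CNI configuration file exists yet in /etc/cni/net.d/, so the node.kubernetes.io/not-ready:NoSchedule taint is still in place, which is exactly why coredns-66bc5c9577-6kdrd and storage-provisioner were reported Unschedulable earlier in the log. Once the freshly started kindnet container writes its CNI config, kubelet flips Ready and the taint is removed. A sketch for watching both fields:
	
		kubectl --context newest-cni-678421 get node newest-cni-678421 -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	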
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d] <==
	{"level":"warn","ts":"2025-11-20T21:23:53.875052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.881515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.890116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.896551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.902778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.909108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.915443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.921969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.929252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.936153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.951452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.957499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.963936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.971109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.977230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.983817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.990931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.997361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.004075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.010051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.020463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.026510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.044998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.051096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.057287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
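	
	The run of "rejected connection ... error: EOF" warnings is benign: each entry is a bare TCP connection to the client port that closed before completing a TLS handshake, the usual signature of port-availability polls and liveness probes during startup. Querying etcd properly requires the client certificates minikube provisions (a sketch; it assumes etcdctl is present on the node, and the cert paths are minikube's defaults rather than something shown in this log):
	
		minikube -p newest-cni-678421 ssh -- sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key endpoint health
	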
	
	
	==> kernel <==
	 21:24:00 up  4:06,  0 user,  load average: 3.42, 4.39, 2.94
	Linux newest-cni-678421 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d] <==
	I1120 21:23:56.126667       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:23:56.126984       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:23:56.127175       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:23:56.127197       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:23:56.127237       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:23:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:23:56.330354       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:23:56.330414       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:23:56.330431       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:23:56.330596       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:23:56.830903       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:23:56.830932       1 metrics.go:72] Registering metrics
	I1120 21:23:56.830987       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc] <==
	I1120 21:23:54.586656       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:23:54.586916       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:23:54.586808       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:23:54.586821       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1120 21:23:54.592871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:23:54.594729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:23:54.603000       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 21:23:54.603989       1 aggregator.go:171] initial CRD sync complete...
	I1120 21:23:54.604011       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:23:54.604019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:23:54.604026       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:23:54.628858       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:23:54.629299       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:54.841905       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:23:54.871177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:23:54.892025       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:23:54.899721       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:23:54.906242       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:23:54.943395       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.196.53"}
	I1120 21:23:54.953311       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.131.234"}
	I1120 21:23:55.488515       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:23:58.124116       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:23:58.324396       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:23:58.474302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:23:58.525846       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f] <==
	I1120 21:23:57.902665       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:23:57.908950       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:23:57.915245       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:23:57.920340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:23:57.920398       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:23:57.920409       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:23:57.920463       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:23:57.920487       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:23:57.920495       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:23:57.920511       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:23:57.920565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:23:57.921077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:23:57.921915       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:23:57.921975       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:23:57.922133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:23:57.924273       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:23:57.925399       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:23:57.927581       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:23:57.930875       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:23:57.931001       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:23:57.931101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-678421"
	I1120 21:23:57.931203       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:23:57.933257       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:23:57.937333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:23:57.945445       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35] <==
	I1120 21:23:55.873341       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:23:55.942194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:23:56.043029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:23:56.043064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1120 21:23:56.043169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:23:56.063501       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:23:56.063575       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:23:56.069169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:23:56.069667       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:23:56.069712       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:56.071474       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:23:56.071519       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:23:56.071585       1 config.go:200] "Starting service config controller"
	I1120 21:23:56.071596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:23:56.071624       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:23:56.071629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:23:56.071648       1 config.go:309] "Starting node config controller"
	I1120 21:23:56.071676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:23:56.071685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:23:56.171761       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:23:56.171786       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:23:56.171813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535] <==
	I1120 21:23:53.505430       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:23:54.516798       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:23:54.516843       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:23:54.516854       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:23:54.516865       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:23:54.545295       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:23:54.545317       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:54.547123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:54.547170       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:54.547492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:23:54.547564       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:23:54.647371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.593384     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-678421\" already exists" pod="kube-system/kube-scheduler-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.593436     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.600192     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-678421\" already exists" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.600419     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.608400     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-678421\" already exists" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.608612     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615631     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615738     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615777     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.616136     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-678421\" already exists" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.616747     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.634788     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-678421\" already exists" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.636165     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-678421\" already exists" pod="kube-system/kube-scheduler-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.636759     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-678421\" already exists" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.479014     674 apiserver.go:52] "Watching apiserver"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.520945     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-lib-modules\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.521037     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-xtables-lock\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.521066     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-cni-cfg\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.582979     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.622084     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-lib-modules\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.622155     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-xtables-lock\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:57 newest-cni-678421 kubelet[674]: I1120 21:23:57.012034     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
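The RBAC warning in the kube-scheduler log above carries its own remedy: the scheduler could not read the extension-apiserver-authentication configmap. A minimal sketch of the suggested rolebinding, binding the built-in extension-apiserver-authentication-reader role to the system:kube-scheduler user that the log shows being denied (the rolebinding name here is illustrative, not from the test run):

	kubectl --context newest-cni-678421 create rolebinding scheduler-auth-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler

As the log notes, the scheduler continues without authentication configuration when this lookup fails and may treat requests as anonymous, which is why it warns rather than exits.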
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-678421 -n newest-cni-678421: exit status 2 (377.957004ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
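The harness reads single fields out of minikube's status struct with Go templates ({{.APIServer}} here, {{.Host}} further down). The same flag can combine several fields in one call; a sketch, assuming the additional {{.Kubelet}} field that minikube's plain status output also reports:

	out/minikube-linux-amd64 status -p newest-cni-678421 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'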
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-678421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb: exit status 1 (68.288587ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6kdrd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-cwjhg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bqvtb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-678421
helpers_test.go:243: (dbg) docker inspect newest-cni-678421:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	        "Created": "2025-11-20T21:23:11.873210251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 580830,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-20T21:23:46.302894933Z",
	            "FinishedAt": "2025-11-20T21:23:45.401763914Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/hosts",
	        "LogPath": "/var/lib/docker/containers/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14/e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14-json.log",
	        "Name": "/newest-cni-678421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-678421:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-678421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e821ad74a972eb06c3455846e2dcca09a699840a7c1cb82c09ae176295297f14",
	                "LowerDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca-init/diff:/var/lib/docker/overlay2/427c3c5ba6c977e3162e594efd88e82f6b7a3578f6e2d0229ecff0fc0fabf0fc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19025f1bef3fe5ef48a7cfe2a847a0885c837b8b8cfbf6795eb66baa81ea74ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-678421",
	                "Source": "/var/lib/docker/volumes/newest-cni-678421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-678421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-678421",
	                "name.minikube.sigs.k8s.io": "newest-cni-678421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "83ad46916fecd3d983ad011b63b46fd65f64db57b75c26977a33d64475371653",
	            "SandboxKey": "/var/run/docker/netns/83ad46916fec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-678421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "093eb702f51d45a3830e07e67ca1106b8fab033ac409a63fdd5ab62c257a2c9e",
	                    "EndpointID": "755e66db3bdd75c2851195eefb5e6397b531b35aa52f6309e042fa93911eaf88",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "76:f1:5f:fe:8e:cf",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-678421",
	                        "e821ad74a972"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
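Any field of this inspect document can be pulled out with the same Go-template syntax the harness itself uses later in this log. For example, the host port mapped to the container's SSH port (22/tcp) can be read directly, matching the NetworkSettings.Ports block above:

	docker container inspect newest-cni-678421 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# per the inspect dump above, this prints: 33133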
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421: exit status 2 (360.378064ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-678421 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-678421 logs -n 25: (1.046521615s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ start   │ -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ old-k8s-version-936214 image list --format=json                                                                                                                                                                                               │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │ 20 Nov 25 21:22 UTC │
	│ pause   │ -p old-k8s-version-936214 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:22 UTC │                     │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p old-k8s-version-936214                                                                                                                                                                                                                     │ old-k8s-version-936214       │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ no-preload-166874 image list --format=json                                                                                                                                                                                                    │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p no-preload-166874 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p no-preload-166874                                                                                                                                                                                                                          │ no-preload-166874            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-678421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ stop    │ -p newest-cni-678421 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ embed-certs-714571 image list --format=json                                                                                                                                                                                                   │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p embed-certs-714571 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-678421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ start   │ -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ delete  │ -p embed-certs-714571                                                                                                                                                                                                                         │ embed-certs-714571           │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ default-k8s-diff-port-454524 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ image   │ newest-cni-678421 image list --format=json                                                                                                                                                                                                    │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │ 20 Nov 25 21:23 UTC │
	│ pause   │ -p default-k8s-diff-port-454524 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-454524 │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	│ pause   │ -p newest-cni-678421 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-678421            │ jenkins │ v1.37.0 │ 20 Nov 25 21:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 21:23:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 21:23:46.048493  580632 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:23:46.048745  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048754  580632 out.go:374] Setting ErrFile to fd 2...
	I1120 21:23:46.048757  580632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:23:46.048979  580632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:23:46.049453  580632 out.go:368] Setting JSON to false
	I1120 21:23:46.050631  580632 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14768,"bootTime":1763659058,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:23:46.050732  580632 start.go:143] virtualization: kvm guest
	I1120 21:23:46.052729  580632 out.go:179] * [newest-cni-678421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:23:46.054001  580632 notify.go:221] Checking for updates...
	I1120 21:23:46.054037  580632 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:23:46.055262  580632 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:23:46.056552  580632 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:46.057861  580632 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:23:46.058912  580632 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:23:46.060118  580632 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:23:46.061652  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:46.062194  580632 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:23:46.086682  580632 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:23:46.086778  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.150294  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.138396042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.150451  580632 docker.go:319] overlay module found
	I1120 21:23:46.152137  580632 out.go:179] * Using the docker driver based on existing profile
	I1120 21:23:46.153355  580632 start.go:309] selected driver: docker
	I1120 21:23:46.153372  580632 start.go:930] validating driver "docker" against &{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.153484  580632 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:23:46.154208  580632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:23:46.217838  580632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:23:46.20805981 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:23:46.218634  580632 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:46.218693  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:46.218746  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:46.218816  580632 start.go:353] cluster config:
	{Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:46.221718  580632 out.go:179] * Starting "newest-cni-678421" primary control-plane node in "newest-cni-678421" cluster
	I1120 21:23:46.223273  580632 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 21:23:46.224323  580632 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1120 21:23:46.225587  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:46.225618  580632 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 21:23:46.225634  580632 cache.go:65] Caching tarball of preloaded images
	I1120 21:23:46.225700  580632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 21:23:46.225713  580632 preload.go:238] Found /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1120 21:23:46.225745  580632 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1120 21:23:46.225840  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.250485  580632 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1120 21:23:46.250505  580632 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1120 21:23:46.250520  580632 cache.go:243] Successfully downloaded all kic artifacts
	I1120 21:23:46.250545  580632 start.go:360] acquireMachinesLock for newest-cni-678421: {Name:mkb568d1e9e13ccda4c82b747fa368691106552e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1120 21:23:46.250600  580632 start.go:364] duration metric: took 36.944µs to acquireMachinesLock for "newest-cni-678421"
	I1120 21:23:46.250616  580632 start.go:96] Skipping create...Using existing machine configuration
	I1120 21:23:46.250624  580632 fix.go:54] fixHost starting: 
	I1120 21:23:46.250818  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.271749  580632 fix.go:112] recreateIfNeeded on newest-cni-678421: state=Stopped err=<nil>
	W1120 21:23:46.271804  580632 fix.go:138] unexpected machine state, will restart: <nil>
	I1120 21:23:46.273510  580632 out.go:252] * Restarting existing docker container for "newest-cni-678421" ...
	I1120 21:23:46.273588  580632 cli_runner.go:164] Run: docker start newest-cni-678421
	I1120 21:23:46.640583  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:46.661704  580632 kic.go:430] container "newest-cni-678421" state is running.
	I1120 21:23:46.662149  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:46.686053  580632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/config.json ...
	I1120 21:23:46.686348  580632 machine.go:94] provisionDockerMachine start ...
	I1120 21:23:46.686428  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:46.706344  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:46.706819  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:46.706846  580632 main.go:143] libmachine: About to run SSH command:
	hostname
	I1120 21:23:46.707727  580632 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56362->127.0.0.1:33133: read: connection reset by peer
	I1120 21:23:49.843845  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:49.843888  580632 ubuntu.go:182] provisioning hostname "newest-cni-678421"
	I1120 21:23:49.843955  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:49.863206  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:49.863522  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:49.863542  580632 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-678421 && echo "newest-cni-678421" | sudo tee /etc/hostname
	I1120 21:23:50.004857  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-678421
	
	I1120 21:23:50.004940  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.023923  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.024145  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.024162  580632 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-678421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-678421/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-678421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1120 21:23:50.156143  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1120 21:23:50.156182  580632 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21923-250580/.minikube CaCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21923-250580/.minikube}
	I1120 21:23:50.156257  580632 ubuntu.go:190] setting up certificates
	I1120 21:23:50.156270  580632 provision.go:84] configureAuth start
	I1120 21:23:50.156339  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.176257  580632 provision.go:143] copyHostCerts
	I1120 21:23:50.176333  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem, removing ...
	I1120 21:23:50.176355  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem
	I1120 21:23:50.176432  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/key.pem (1675 bytes)
	I1120 21:23:50.176553  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem, removing ...
	I1120 21:23:50.176566  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem
	I1120 21:23:50.176606  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/ca.pem (1082 bytes)
	I1120 21:23:50.176690  580632 exec_runner.go:144] found /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem, removing ...
	I1120 21:23:50.176700  580632 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem
	I1120 21:23:50.176737  580632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21923-250580/.minikube/cert.pem (1123 bytes)
	I1120 21:23:50.176809  580632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem org=jenkins.newest-cni-678421 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-678421]
	I1120 21:23:50.229409  580632 provision.go:177] copyRemoteCerts
	I1120 21:23:50.229481  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1120 21:23:50.229536  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.248655  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.344789  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1120 21:23:50.363151  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1120 21:23:50.381153  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1120 21:23:50.399055  580632 provision.go:87] duration metric: took 242.768844ms to configureAuth
	I1120 21:23:50.399082  580632 ubuntu.go:206] setting minikube options for container-runtime
	I1120 21:23:50.399272  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:50.399375  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.418619  580632 main.go:143] libmachine: Using SSH client type: native
	I1120 21:23:50.418835  580632 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1120 21:23:50.418850  580632 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1120 21:23:50.711816  580632 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1120 21:23:50.711848  580632 machine.go:97] duration metric: took 4.025481618s to provisionDockerMachine
	I1120 21:23:50.711864  580632 start.go:293] postStartSetup for "newest-cni-678421" (driver="docker")
	I1120 21:23:50.711878  580632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1120 21:23:50.711941  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1120 21:23:50.711982  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.732036  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.829787  580632 ssh_runner.go:195] Run: cat /etc/os-release
	I1120 21:23:50.833560  580632 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1120 21:23:50.833616  580632 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1120 21:23:50.833627  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/addons for local assets ...
	I1120 21:23:50.833705  580632 filesync.go:126] Scanning /home/jenkins/minikube-integration/21923-250580/.minikube/files for local assets ...
	I1120 21:23:50.833835  580632 filesync.go:149] local asset: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem -> 2540942.pem in /etc/ssl/certs
	I1120 21:23:50.833980  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1120 21:23:50.842564  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:50.861253  580632 start.go:296] duration metric: took 149.369694ms for postStartSetup
	I1120 21:23:50.861339  580632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:23:50.861377  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.880051  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:50.974208  580632 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
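	The two df probes above report disk pressure in different units; what each pipeline prints, for reference (the concrete values are illustrative, not from this run):
	    df -h /var | awk 'NR==2{print $5}'   # 2nd line, 5th field: used space as a percentage, e.g. 12%
	    df -BG /var | awk 'NR==2{print $4}'  # 2nd line, 4th field: available space in whole GiB, e.g. 270G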
	I1120 21:23:50.979126  580632 fix.go:56] duration metric: took 4.728491713s for fixHost
	I1120 21:23:50.979158  580632 start.go:83] releasing machines lock for "newest-cni-678421", held for 4.728546595s
	I1120 21:23:50.979256  580632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-678421
	I1120 21:23:50.998093  580632 ssh_runner.go:195] Run: cat /version.json
	I1120 21:23:50.998117  580632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1120 21:23:50.998142  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:50.998179  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:51.019563  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.019937  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:51.111789  580632 ssh_runner.go:195] Run: systemctl --version
	I1120 21:23:51.174631  580632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1120 21:23:51.210913  580632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1120 21:23:51.216140  580632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1120 21:23:51.216212  580632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1120 21:23:51.225258  580632 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1120 21:23:51.225285  580632 start.go:496] detecting cgroup driver to use...
	I1120 21:23:51.225322  580632 detect.go:190] detected "systemd" cgroup driver on host os
	I1120 21:23:51.225373  580632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1120 21:23:51.239684  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1120 21:23:51.252817  580632 docker.go:218] disabling cri-docker service (if available) ...
	I1120 21:23:51.252873  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1120 21:23:51.267677  580632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1120 21:23:51.280313  580632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1120 21:23:51.359820  580632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1120 21:23:51.440243  580632 docker.go:234] disabling docker service ...
	I1120 21:23:51.440315  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1120 21:23:51.455600  580632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1120 21:23:51.468814  580632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1120 21:23:51.549991  580632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1120 21:23:51.639411  580632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1120 21:23:51.653330  580632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1120 21:23:51.668426  580632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1120 21:23:51.668496  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.678387  580632 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1120 21:23:51.678448  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.687514  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.696617  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.705907  580632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1120 21:23:51.714416  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.724299  580632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1120 21:23:51.733643  580632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
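	Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, written out as a sketch (the TOML section headers are assumed; only the keys touched by the commands are shown):
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]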
	I1120 21:23:51.743143  580632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1120 21:23:51.751288  580632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
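	The raw /proc write above is equivalent to the sysctl form, for reference:
	    sudo sysctl -w net.ipv4.ip_forward=1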
	I1120 21:23:51.758956  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:51.839688  580632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1120 21:23:51.991719  580632 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1120 21:23:51.991791  580632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1120 21:23:51.995963  580632 start.go:564] Will wait 60s for crictl version
	I1120 21:23:51.996011  580632 ssh_runner.go:195] Run: which crictl
	I1120 21:23:51.999596  580632 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1120 21:23:52.025769  580632 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1120 21:23:52.025844  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.055148  580632 ssh_runner.go:195] Run: crio --version
	I1120 21:23:52.087414  580632 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1120 21:23:52.088512  580632 cli_runner.go:164] Run: docker network inspect newest-cni-678421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1120 21:23:52.106859  580632 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1120 21:23:52.111317  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
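	The one-liner above is minikube's idempotent /etc/hosts update: strip any existing entry for the name, append the fresh mapping, then copy the result back with sudo. The same idiom as a standalone sketch (hypothetical helper; IP and hostname taken from the log):
	    update_hosts_entry() {
	      local ip="$1" name="$2"
	      # drop any line ending in "<TAB><name>", then append the new mapping
	      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${name}"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    update_hosts_entry 192.168.103.1 host.minikube.internal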
	I1120 21:23:52.124544  580632 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1120 21:23:52.125757  580632 kubeadm.go:884] updating cluster {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1120 21:23:52.125892  580632 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 21:23:52.125953  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.159731  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.159752  580632 crio.go:433] Images already preloaded, skipping extraction
	I1120 21:23:52.159798  580632 ssh_runner.go:195] Run: sudo crictl images --output json
	I1120 21:23:52.187161  580632 crio.go:514] all images are preloaded for cri-o runtime.
	I1120 21:23:52.187185  580632 cache_images.go:86] Images are preloaded, skipping loading
	I1120 21:23:52.187193  580632 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1120 21:23:52.187306  580632 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-678421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
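	In the kubelet unit dumped above, the bare ExecStart= line is the standard systemd idiom for clearing an inherited ExecStart before overriding it; without it, systemd rejects a second ExecStart for a non-oneshot service. Minimal illustration of the same pattern (trimmed; path and flags taken from this log):
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2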
	I1120 21:23:52.187376  580632 ssh_runner.go:195] Run: crio config
	I1120 21:23:52.235170  580632 cni.go:84] Creating CNI manager for ""
	I1120 21:23:52.235200  580632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 21:23:52.235246  580632 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1120 21:23:52.235280  580632 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-678421 NodeName:newest-cni-678421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1120 21:23:52.235426  580632 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-678421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1120 21:23:52.235503  580632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1120 21:23:52.243927  580632 binaries.go:51] Found k8s binaries, skipping transfer
	I1120 21:23:52.244009  580632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1120 21:23:52.252390  580632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1120 21:23:52.265368  580632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1120 21:23:52.278329  580632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
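	A generated manifest like the one dumped above can be sanity-checked before kubeadm consumes it; a sketch using the binary and path from this log (kubeadm ships a "config validate" subcommand in this version family, but running it is not part of the recorded flow):
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new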
	I1120 21:23:52.292057  580632 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1120 21:23:52.296001  580632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1120 21:23:52.306383  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:52.386821  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:52.412054  580632 certs.go:69] Setting up /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421 for IP: 192.168.103.2
	I1120 21:23:52.412083  580632 certs.go:195] generating shared ca certs ...
	I1120 21:23:52.412101  580632 certs.go:227] acquiring lock for ca certs: {Name:mk7187d12aef4050eb9201220d898d7aa4d772a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:52.412365  580632 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key
	I1120 21:23:52.412416  580632 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key
	I1120 21:23:52.412425  580632 certs.go:257] generating profile certs ...
	I1120 21:23:52.412506  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/client.key
	I1120 21:23:52.412557  580632 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key.596c5ceb
	I1120 21:23:52.412600  580632 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key
	I1120 21:23:52.412708  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem (1338 bytes)
	W1120 21:23:52.412737  580632 certs.go:480] ignoring /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094_empty.pem, impossibly tiny 0 bytes
	I1120 21:23:52.412744  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca-key.pem (1679 bytes)
	I1120 21:23:52.412764  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/ca.pem (1082 bytes)
	I1120 21:23:52.412789  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/cert.pem (1123 bytes)
	I1120 21:23:52.412810  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/certs/key.pem (1675 bytes)
	I1120 21:23:52.412858  580632 certs.go:484] found cert: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem (1708 bytes)
	I1120 21:23:52.413501  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1120 21:23:52.433062  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1120 21:23:52.455200  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1120 21:23:52.474785  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1120 21:23:52.498862  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1120 21:23:52.517609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1120 21:23:52.536537  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1120 21:23:52.554621  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/newest-cni-678421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1120 21:23:52.573916  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1120 21:23:52.592807  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/certs/254094.pem --> /usr/share/ca-certificates/254094.pem (1338 bytes)
	I1120 21:23:52.612609  580632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/ssl/certs/2540942.pem --> /usr/share/ca-certificates/2540942.pem (1708 bytes)
	I1120 21:23:52.631336  580632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1120 21:23:52.644868  580632 ssh_runner.go:195] Run: openssl version
	I1120 21:23:52.651511  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.659232  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1120 21:23:52.666879  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670902  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 20 20:30 /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.670977  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1120 21:23:52.706119  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1120 21:23:52.714315  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.722433  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/254094.pem /etc/ssl/certs/254094.pem
	I1120 21:23:52.730632  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734534  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 20 20:35 /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.734604  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254094.pem
	I1120 21:23:52.769094  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1120 21:23:52.777452  580632 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.785374  580632 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2540942.pem /etc/ssl/certs/2540942.pem
	I1120 21:23:52.793202  580632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797062  580632 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 20 20:35 /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.797113  580632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2540942.pem
	I1120 21:23:52.832297  580632 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
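	The ln/openssl/test triples above implement OpenSSL's hashed-directory CA lookup: a CA is found under /etc/ssl/certs/<subject-hash>.0, where the hash is what "openssl x509 -hash" prints (b5213941, 51391683, 3ec20f2e here). One way to create such a link by hand, as a sketch (the log only verifies that the hash link already exists):
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"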
	I1120 21:23:52.840467  580632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1120 21:23:52.844760  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1120 21:23:52.879344  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1120 21:23:52.914405  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1120 21:23:52.957573  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1120 21:23:53.002001  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1120 21:23:53.059315  580632 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
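	openssl's -checkend N exits non-zero when the certificate expires within N seconds, so the six runs above assert at least 86400s (24h) of remaining validity for each control-plane cert. Spelled out for one of them:
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for >24h' || echo 'expires within 24h'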
	I1120 21:23:53.118112  580632 kubeadm.go:401] StartCluster: {Name:newest-cni-678421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-678421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 21:23:53.118249  580632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1120 21:23:53.118315  580632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1120 21:23:53.155937  580632 cri.go:89] found id: "e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535"
	I1120 21:23:53.155964  580632 cri.go:89] found id: "844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f"
	I1120 21:23:53.155969  580632 cri.go:89] found id: "8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d"
	I1120 21:23:53.155973  580632 cri.go:89] found id: "37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc"
	I1120 21:23:53.155977  580632 cri.go:89] found id: ""
	I1120 21:23:53.156028  580632 ssh_runner.go:195] Run: sudo runc list -f json
	W1120 21:23:53.169575  580632 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T21:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1120 21:23:53.169666  580632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1120 21:23:53.178572  580632 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1120 21:23:53.178598  580632 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1120 21:23:53.178648  580632 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1120 21:23:53.186477  580632 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:23:53.187131  580632 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-678421" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.187455  580632 kubeconfig.go:62] /home/jenkins/minikube-integration/21923-250580/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-678421" cluster setting kubeconfig missing "newest-cni-678421" context setting]
	I1120 21:23:53.188141  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.189852  580632 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1120 21:23:53.197764  580632 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1120 21:23:53.197798  580632 kubeadm.go:602] duration metric: took 19.193608ms to restartPrimaryControlPlane
	I1120 21:23:53.197808  580632 kubeadm.go:403] duration metric: took 79.708097ms to StartCluster
	I1120 21:23:53.197825  580632 settings.go:142] acquiring lock: {Name:mka092005482936c9872e017c44960f0ffa54ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.197892  580632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:23:53.199030  580632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21923-250580/kubeconfig: {Name:mkf1f1c81410e5e8ee977d1ce97b1cc044f9fa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1120 21:23:53.199301  580632 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1120 21:23:53.199413  580632 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1120 21:23:53.199502  580632 config.go:182] Loaded profile config "newest-cni-678421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:23:53.199513  580632 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-678421"
	I1120 21:23:53.199538  580632 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-678421"
	W1120 21:23:53.199549  580632 addons.go:248] addon storage-provisioner should already be in state true
	I1120 21:23:53.199556  580632 addons.go:70] Setting dashboard=true in profile "newest-cni-678421"
	I1120 21:23:53.199565  580632 addons.go:70] Setting default-storageclass=true in profile "newest-cni-678421"
	I1120 21:23:53.199573  580632 addons.go:239] Setting addon dashboard=true in "newest-cni-678421"
	I1120 21:23:53.199579  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	W1120 21:23:53.199581  580632 addons.go:248] addon dashboard should already be in state true
	I1120 21:23:53.199581  580632 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-678421"
	I1120 21:23:53.199609  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.199914  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200071  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.200090  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.201838  580632 out.go:179] * Verifying Kubernetes components...
	I1120 21:23:53.203715  580632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1120 21:23:53.227682  580632 addons.go:239] Setting addon default-storageclass=true in "newest-cni-678421"
	W1120 21:23:53.227708  580632 addons.go:248] addon default-storageclass should already be in state true
	I1120 21:23:53.227739  580632 host.go:66] Checking if "newest-cni-678421" exists ...
	I1120 21:23:53.228202  580632 cli_runner.go:164] Run: docker container inspect newest-cni-678421 --format={{.State.Status}}
	I1120 21:23:53.228303  580632 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1120 21:23:53.228953  580632 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1120 21:23:53.229725  580632 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.229745  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1120 21:23:53.229800  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.231712  580632 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1120 21:23:53.232850  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1120 21:23:53.232872  580632 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1120 21:23:53.232947  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.265766  580632 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.265850  580632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1120 21:23:53.265935  580632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-678421
	I1120 21:23:53.266314  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.268641  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.294295  580632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/newest-cni-678421/id_rsa Username:docker}
	I1120 21:23:53.343706  580632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1120 21:23:53.357884  580632 api_server.go:52] waiting for apiserver process to appear ...
	I1120 21:23:53.357961  580632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:23:53.370517  580632 api_server.go:72] duration metric: took 171.180002ms to wait for apiserver process to appear ...
	I1120 21:23:53.370547  580632 api_server.go:88] waiting for apiserver healthz status ...
	I1120 21:23:53.370574  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:53.384995  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1120 21:23:53.385021  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1120 21:23:53.387564  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1120 21:23:53.400161  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1120 21:23:53.400191  580632 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1120 21:23:53.410438  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1120 21:23:53.417003  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1120 21:23:53.417034  580632 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1120 21:23:53.431937  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1120 21:23:53.431967  580632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1120 21:23:53.449462  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1120 21:23:53.449491  580632 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1120 21:23:53.468486  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1120 21:23:53.468515  580632 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1120 21:23:53.485129  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1120 21:23:53.485160  580632 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1120 21:23:53.498138  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1120 21:23:53.498163  580632 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1120 21:23:53.513999  580632 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:53.514025  580632 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1120 21:23:53.528983  580632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1120 21:23:54.509181  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.509231  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1120 21:23:54.509249  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.515250  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1120 21:23:54.515284  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
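	The anonymous 403s above are expected at this stage: minikube probes /healthz without credentials, and the denial clears once the apiserver finishes its RBAC bootstrap (note the rbac/bootstrap-roles poststarthook still failing in the 500 responses that follow). For manual debugging, an authenticated equivalent would be a sketch like (context name assumed from the profile):
	    kubectl --context newest-cni-678421 get --raw /healthz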
	I1120 21:23:54.871126  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:54.876266  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:54.876293  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.038413  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650810705s)
	I1120 21:23:55.038453  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.627983684s)
	I1120 21:23:55.038566  580632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.509543165s)
	I1120 21:23:55.040585  580632 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-678421 addons enable metrics-server
	
	I1120 21:23:55.050757  580632 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1120 21:23:55.052024  580632 addons.go:515] duration metric: took 1.852618686s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1120 21:23:55.370859  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.375402  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1120 21:23:55.375429  580632 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1120 21:23:55.871078  580632 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1120 21:23:55.875821  580632 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1120 21:23:55.876995  580632 api_server.go:141] control plane version: v1.34.1
	I1120 21:23:55.877022  580632 api_server.go:131] duration metric: took 2.506467275s to wait for apiserver health ...
	I1120 21:23:55.877035  580632 system_pods.go:43] waiting for kube-system pods to appear ...
	I1120 21:23:55.881011  580632 system_pods.go:59] 8 kube-system pods found
	I1120 21:23:55.881052  580632 system_pods.go:61] "coredns-66bc5c9577-6kdrd" [e092d7c4-5ce3-4731-86e7-711683ff35b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881064  580632 system_pods.go:61] "etcd-newest-cni-678421" [74955e0b-48f8-44e6-99e2-dbf01fedae9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1120 21:23:55.881076  580632 system_pods.go:61] "kindnet-454t9" [feeb8743-b4be-40fb-b110-fa0ff2c8eb0d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1120 21:23:55.881086  580632 system_pods.go:61] "kube-apiserver-newest-cni-678421" [5ebcbd8d-931a-478e-9e92-efe8a955d811] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1120 21:23:55.881102  580632 system_pods.go:61] "kube-controller-manager-newest-cni-678421" [109bdb47-4671-42ba-a925-ae7086ee2550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1120 21:23:55.881111  580632 system_pods.go:61] "kube-proxy-t5jmf" [15b0f18f-00f6-4f9c-9554-0054d1da612b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1120 21:23:55.881120  580632 system_pods.go:61] "kube-scheduler-newest-cni-678421" [a3663dc0-e28d-4a1b-932a-9b300a8472c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1120 21:23:55.881127  580632 system_pods.go:61] "storage-provisioner" [b1959150-9e18-40b7-b710-d7a93b033b46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1120 21:23:55.881138  580632 system_pods.go:74] duration metric: took 4.09635ms to wait for pod list to return data ...
	I1120 21:23:55.881153  580632 default_sa.go:34] waiting for default service account to be created ...
	I1120 21:23:55.883836  580632 default_sa.go:45] found service account: "default"
	I1120 21:23:55.883863  580632 default_sa.go:55] duration metric: took 2.701397ms for default service account to be created ...
	I1120 21:23:55.883875  580632 kubeadm.go:587] duration metric: took 2.684545859s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1120 21:23:55.883891  580632 node_conditions.go:102] verifying NodePressure condition ...
	I1120 21:23:55.886610  580632 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1120 21:23:55.886636  580632 node_conditions.go:123] node cpu capacity is 8
	I1120 21:23:55.886650  580632 node_conditions.go:105] duration metric: took 2.75414ms to run NodePressure ...
	I1120 21:23:55.886662  580632 start.go:242] waiting for startup goroutines ...
	I1120 21:23:55.886668  580632 start.go:247] waiting for cluster config update ...
	I1120 21:23:55.886679  580632 start.go:256] writing updated cluster config ...
	I1120 21:23:55.886967  580632 ssh_runner.go:195] Run: rm -f paused
	I1120 21:23:55.937055  580632 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1120 21:23:55.938658  580632 out.go:179] * Done! kubectl is now configured to use "newest-cni-678421" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.787376973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.790850713Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=56fc0576-84fd-490d-8978-3f98a9850d89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.792313053Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=998a4c29-3959-444f-a419-179d5b797259 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.792886133Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.79374391Z" level=info msg="Ran pod sandbox bd649de1461b51dd7032d2bbc181af80adfdecaf5205095bb7a861c187ea7c56 with infra container: kube-system/kindnet-454t9/POD" id=56fc0576-84fd-490d-8978-3f98a9850d89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.793788569Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.794693141Z" level=info msg="Ran pod sandbox c9bd7e872776429ae0d70e0cfe4f324fe0fff756b7598098068f1037b36da853 with infra container: kube-system/kube-proxy-t5jmf/POD" id=998a4c29-3959-444f-a419-179d5b797259 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.795021092Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5fb7d238-0b23-4626-a318-fde5774053cb name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.795670009Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2ef3fc6f-ec71-46e3-8c65-0d5cdbba0538 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.796132948Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8d3432fd-2d3f-4dcb-a5d7-34370db40bfe name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.796580543Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=920015e9-c661-42ca-9aa5-63ea2f976d67 name=/runtime.v1.ImageService/ImageStatus
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797180235Z" level=info msg="Creating container: kube-system/kindnet-454t9/kindnet-cni" id=6dd0408c-89bc-4c5c-a106-5954e065c3e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797295059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797524861Z" level=info msg="Creating container: kube-system/kube-proxy-t5jmf/kube-proxy" id=49a98c38-4add-40c3-8fd7-bb152a2ca3ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.797799321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.802485506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.803080492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.805245685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.805955407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.832508048Z" level=info msg="Created container 3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d: kube-system/kindnet-454t9/kindnet-cni" id=6dd0408c-89bc-4c5c-a106-5954e065c3e6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.833187647Z" level=info msg="Starting container: 3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d" id=83c2a7cc-7ea0-426f-8f5f-e4ce34150fc3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.834884204Z" level=info msg="Started container" PID=1050 containerID=3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d description=kube-system/kindnet-454t9/kindnet-cni id=83c2a7cc-7ea0-426f-8f5f-e4ce34150fc3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd649de1461b51dd7032d2bbc181af80adfdecaf5205095bb7a861c187ea7c56
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.835492494Z" level=info msg="Created container 65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35: kube-system/kube-proxy-t5jmf/kube-proxy" id=49a98c38-4add-40c3-8fd7-bb152a2ca3ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.83607006Z" level=info msg="Starting container: 65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35" id=5a765aef-0bd3-4125-88a5-a0ccda03640c name=/runtime.v1.RuntimeService/StartContainer
	Nov 20 21:23:55 newest-cni-678421 crio[523]: time="2025-11-20T21:23:55.83858167Z" level=info msg="Started container" PID=1051 containerID=65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35 description=kube-system/kube-proxy-t5jmf/kube-proxy id=5a765aef-0bd3-4125-88a5-a0ccda03640c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9bd7e872776429ae0d70e0cfe4f324fe0fff756b7598098068f1037b36da853
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	65abef9f5a4ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   c9bd7e8727764       kube-proxy-t5jmf                            kube-system
	3a3122c4d5a98       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   bd649de1461b5       kindnet-454t9                               kube-system
	e94369746f3f1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   63fb7a975a70b       kube-scheduler-newest-cni-678421            kube-system
	844f21c4918c5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   73dd51434acb6       kube-controller-manager-newest-cni-678421   kube-system
	8624913048b6d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   7154c31cbe523       etcd-newest-cni-678421                      kube-system
	37acb2d75f157       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   7fd6ecc2dd973       kube-apiserver-newest-cni-678421            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-678421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-678421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e5f32c933323f8faf93af2b2e6712b52670dd173
	                    minikube.k8s.io/name=newest-cni-678421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_20T21_23_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Nov 2025 21:23:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-678421
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Nov 2025 21:23:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 20 Nov 2025 21:23:54 +0000   Thu, 20 Nov 2025 21:23:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-678421
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                aea61b33-8516-4da2-aaf9-1fdf3bc040c2
	  Boot ID:                    d80995a6-3cf5-4236-8c97-17f242d3f332
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-678421                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-454t9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-678421             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-678421    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-t5jmf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-678421             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node newest-cni-678421 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-678421 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node newest-cni-678421 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node newest-cni-678421 event: Registered Node newest-cni-678421 in Controller
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-678421 event: Registered Node newest-cni-678421 in Controller
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 62 b9 6f 53 d9 91 26 28 49 cb fc 9e 08 00
	[Nov20 21:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +0.000046] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 7c 77 91 cb 29 08 06
	[ +21.310468] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 23 4d 55 ff 6b 08 06
	[  +0.000611] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 09 d4 5c ed a6 08 06
	[  +8.712826] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe 49 d3 9e f5 79 08 06
	[  +0.068443] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[  +2.799920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 40 be 08 2f ff 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 20 c4 60 3b 7c 08 06
	[Nov20 21:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 d0 0f df 9d 90 08 06
	[  +0.000402] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 9f 38 b8 c7 2e 08 06
	[Nov20 21:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea 76 40 72 83 5a 08 06
	[  +0.000478] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 6b 0f 5c 90 da 08 06
	
	
	==> etcd [8624913048b6d47ef167684dd98c17b1d04167d6b27db5d18f2ad4f4ae28ab6d] <==
	{"level":"warn","ts":"2025-11-20T21:23:53.875052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.881515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.890116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.896551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.902778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.909108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.915443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.921969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.929252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.936153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.951452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.957499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.963936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.971109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.977230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.983817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.990931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:53.997361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.004075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.010051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.020463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.026510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.044998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.051096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-20T21:23:54.057287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:24:02 up  4:06,  0 user,  load average: 3.95, 4.48, 2.97
	Linux newest-cni-678421 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a3122c4d5a982d07ba1501af0a06eea0fb1dc08910d72943138fbe7bbff613d] <==
	I1120 21:23:56.126667       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1120 21:23:56.126984       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1120 21:23:56.127175       1 main.go:148] setting mtu 1500 for CNI 
	I1120 21:23:56.127197       1 main.go:178] kindnetd IP family: "ipv4"
	I1120 21:23:56.127237       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-20T21:23:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1120 21:23:56.330354       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1120 21:23:56.330414       1 controller.go:381] "Waiting for informer caches to sync"
	I1120 21:23:56.330431       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1120 21:23:56.330596       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1120 21:23:56.830903       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1120 21:23:56.830932       1 metrics.go:72] Registering metrics
	I1120 21:23:56.830987       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [37acb2d75f157484be28d0a86ab882db9cf2ffab959db8dfc0546df3b4b438bc] <==
	I1120 21:23:54.586656       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1120 21:23:54.586916       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1120 21:23:54.586808       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1120 21:23:54.586821       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1120 21:23:54.592871       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1120 21:23:54.594729       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1120 21:23:54.603000       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1120 21:23:54.603989       1 aggregator.go:171] initial CRD sync complete...
	I1120 21:23:54.604011       1 autoregister_controller.go:144] Starting autoregister controller
	I1120 21:23:54.604019       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1120 21:23:54.604026       1 cache.go:39] Caches are synced for autoregister controller
	I1120 21:23:54.628858       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1120 21:23:54.629299       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1120 21:23:54.841905       1 controller.go:667] quota admission added evaluator for: namespaces
	I1120 21:23:54.871177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1120 21:23:54.892025       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1120 21:23:54.899721       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1120 21:23:54.906242       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1120 21:23:54.943395       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.196.53"}
	I1120 21:23:54.953311       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.131.234"}
	I1120 21:23:55.488515       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1120 21:23:58.124116       1 controller.go:667] quota admission added evaluator for: endpoints
	I1120 21:23:58.324396       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1120 21:23:58.474302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1120 21:23:58.525846       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [844f21c4918c5c5a5d09675328ed3fc42fe1b87fa9c41dae3af0da831bfa488f] <==
	I1120 21:23:57.902665       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1120 21:23:57.908950       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1120 21:23:57.915245       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1120 21:23:57.920340       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1120 21:23:57.920398       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1120 21:23:57.920409       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1120 21:23:57.920463       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1120 21:23:57.920487       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1120 21:23:57.920495       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1120 21:23:57.920511       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1120 21:23:57.920565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1120 21:23:57.921077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1120 21:23:57.921915       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1120 21:23:57.921975       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1120 21:23:57.922133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1120 21:23:57.924273       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1120 21:23:57.925399       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1120 21:23:57.927581       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1120 21:23:57.930875       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1120 21:23:57.931001       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1120 21:23:57.931101       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-678421"
	I1120 21:23:57.931203       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1120 21:23:57.933257       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1120 21:23:57.937333       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1120 21:23:57.945445       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [65abef9f5a4adfc9ffb14f61a3446c7017624dbe39408bde069719f788becc35] <==
	I1120 21:23:55.873341       1 server_linux.go:53] "Using iptables proxy"
	I1120 21:23:55.942194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1120 21:23:56.043029       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1120 21:23:56.043064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1120 21:23:56.043169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1120 21:23:56.063501       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1120 21:23:56.063575       1 server_linux.go:132] "Using iptables Proxier"
	I1120 21:23:56.069169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1120 21:23:56.069667       1 server.go:527] "Version info" version="v1.34.1"
	I1120 21:23:56.069712       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:56.071474       1 config.go:403] "Starting serviceCIDR config controller"
	I1120 21:23:56.071519       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1120 21:23:56.071585       1 config.go:200] "Starting service config controller"
	I1120 21:23:56.071596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1120 21:23:56.071624       1 config.go:106] "Starting endpoint slice config controller"
	I1120 21:23:56.071629       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1120 21:23:56.071648       1 config.go:309] "Starting node config controller"
	I1120 21:23:56.071676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1120 21:23:56.071685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1120 21:23:56.171761       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1120 21:23:56.171786       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1120 21:23:56.171813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e94369746f3f1bb3dea5c7eefa0b04af5a66a9a63048f17f7f4c9f0e56eb7535] <==
	I1120 21:23:53.505430       1 serving.go:386] Generated self-signed cert in-memory
	W1120 21:23:54.516798       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1120 21:23:54.516843       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1120 21:23:54.516854       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1120 21:23:54.516865       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1120 21:23:54.545295       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1120 21:23:54.545317       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1120 21:23:54.547123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:54.547170       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1120 21:23:54.547492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1120 21:23:54.547564       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1120 21:23:54.647371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.593384     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-678421\" already exists" pod="kube-system/kube-scheduler-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.593436     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.600192     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-678421\" already exists" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.600419     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.608400     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-678421\" already exists" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.608612     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615631     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615738     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.615777     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.616136     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-678421\" already exists" pod="kube-system/kube-controller-manager-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: I1120 21:23:54.616747     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.634788     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-678421\" already exists" pod="kube-system/kube-apiserver-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.636165     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-678421\" already exists" pod="kube-system/kube-scheduler-newest-cni-678421"
	Nov 20 21:23:54 newest-cni-678421 kubelet[674]: E1120 21:23:54.636759     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-678421\" already exists" pod="kube-system/etcd-newest-cni-678421"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.479014     674 apiserver.go:52] "Watching apiserver"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.520945     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-lib-modules\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.521037     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-xtables-lock\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.521066     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/feeb8743-b4be-40fb-b110-fa0ff2c8eb0d-cni-cfg\") pod \"kindnet-454t9\" (UID: \"feeb8743-b4be-40fb-b110-fa0ff2c8eb0d\") " pod="kube-system/kindnet-454t9"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.582979     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.622084     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-lib-modules\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:55 newest-cni-678421 kubelet[674]: I1120 21:23:55.622155     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b0f18f-00f6-4f9c-9554-0054d1da612b-xtables-lock\") pod \"kube-proxy-t5jmf\" (UID: \"15b0f18f-00f6-4f9c-9554-0054d1da612b\") " pod="kube-system/kube-proxy-t5jmf"
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 20 21:23:57 newest-cni-678421 kubelet[674]: I1120 21:23:57.012034     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 20 21:23:57 newest-cni-678421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-678421 -n newest-cni-678421
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-678421 -n newest-cni-678421: exit status 2 (355.230791ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-678421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb: exit status 1 (66.634832ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6kdrd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-cwjhg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bqvtb" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-678421 describe pod coredns-66bc5c9577-6kdrd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-cwjhg kubernetes-dashboard-855c9754f9-bqvtb: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.28s)
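
For reference, the post-mortem sequence above can be replayed by hand against a live profile. A minimal sketch, assuming the newest-cni-678421 profile from this run still exists and kubectl is on PATH (the jsonpath and field-selector arguments are quoted here for interactive shell use; the test harness passes them without a shell):

	# Ask minikube whether the API server is Running, Paused, or Stopped
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-678421 -n newest-cni-678421

	# List pods not in the Running phase across all namespaces, as helpers_test.go does
	kubectl --context newest-cni-678421 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector='status.phase!=Running'

The NotFound errors in the stderr block above can occur when the listed pods have already been deleted between the list step and the describe step.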


Test pass (262/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.35
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 12.95
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.89
22 TestOffline 52.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 103.5
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 9.44
48 TestAddons/StoppedEnableDisable 18.56
49 TestCertOptions 33.74
50 TestCertExpiration 216.68
52 TestForceSystemdFlag 30.92
53 TestForceSystemdEnv 29.41
58 TestErrorSpam/setup 23.83
59 TestErrorSpam/start 0.7
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 6.88
62 TestErrorSpam/unpause 5.38
63 TestErrorSpam/stop 12.68
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.19
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.22
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.29
75 TestFunctional/serial/CacheCmd/cache/add_local 2.32
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 39.39
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.19
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 8.27
91 TestFunctional/parallel/DryRun 0.41
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.09
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 25.06
101 TestFunctional/parallel/SSHCmd 0.68
102 TestFunctional/parallel/CpCmd 1.9
103 TestFunctional/parallel/MySQL 16.28
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.91
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.91
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 1.52
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.42
122 TestFunctional/parallel/ImageCommands/Setup 1.78
123 TestFunctional/parallel/Version/short 0.09
124 TestFunctional/parallel/Version/components 0.6
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.28
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
145 TestFunctional/parallel/ProfileCmd/profile_list 0.52
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
147 TestFunctional/parallel/MountCmd/any-port 7.68
148 TestFunctional/parallel/MountCmd/specific-port 1.95
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.08
163 TestMultiControlPlane/serial/DeployApp 5.31
164 TestMultiControlPlane/serial/PingHostFromPods 1.09
165 TestMultiControlPlane/serial/AddWorkerNode 28.13
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.53
169 TestMultiControlPlane/serial/StopSecondaryNode 18.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.96
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.58
176 TestMultiControlPlane/serial/StopCluster 38.31
177 TestMultiControlPlane/serial/RestartCluster 55.22
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 41.95
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
185 TestJSONOutput/start/Command 38.49
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.24
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 36.34
211 TestKicCustomNetwork/use_default_bridge_network 24.21
212 TestKicExistingNetwork 27.27
213 TestKicCustomSubnet 25.08
214 TestKicStaticIP 24.95
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.49
219 TestMountStart/serial/StartWithMountFirst 5.88
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.83
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 8.2
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 65.84
231 TestMultiNode/serial/DeployApp2Nodes 4.68
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 22.75
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 9.91
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 7.57
239 TestMultiNode/serial/RestartKeepsNodes 77.13
240 TestMultiNode/serial/DeleteNode 5.28
241 TestMultiNode/serial/StopMultiNode 30.46
242 TestMultiNode/serial/RestartMultiNode 45.51
243 TestMultiNode/serial/ValidateNameConflict 23.66
248 TestPreload 116.01
250 TestScheduledStopUnix 97.35
253 TestInsufficientStorage 12.47
254 TestRunningBinaryUpgrade 70.58
256 TestKubernetesUpgrade 312.9
257 TestMissingContainerUpgrade 120.06
259 TestPause/serial/Start 49.41
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/StartWithK8s 31.71
270 TestNetworkPlugins/group/false 7.28
274 TestNoKubernetes/serial/StartWithStopK8s 16.74
275 TestNoKubernetes/serial/Start 9.57
276 TestPause/serial/SecondStartNoReconfiguration 8.12
278 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
280 TestNoKubernetes/serial/ProfileList 2.14
281 TestNoKubernetes/serial/Stop 1.33
282 TestNoKubernetes/serial/StartNoArgs 10.06
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
284 TestStoppedBinaryUpgrade/Setup 2.7
285 TestStoppedBinaryUpgrade/Upgrade 41.64
293 TestNetworkPlugins/group/auto/Start 40.42
294 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
295 TestNetworkPlugins/group/kindnet/Start 40.63
296 TestNetworkPlugins/group/auto/KubeletFlags 0.31
297 TestNetworkPlugins/group/auto/NetCatPod 9.19
298 TestNetworkPlugins/group/auto/DNS 0.11
299 TestNetworkPlugins/group/auto/Localhost 0.1
300 TestNetworkPlugins/group/auto/HairPin 0.11
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/flannel/Start 51.77
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
304 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
305 TestNetworkPlugins/group/enable-default-cni/Start 40.86
306 TestNetworkPlugins/group/kindnet/DNS 0.12
307 TestNetworkPlugins/group/kindnet/Localhost 0.09
308 TestNetworkPlugins/group/kindnet/HairPin 0.1
309 TestNetworkPlugins/group/bridge/Start 41.06
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.21
312 TestNetworkPlugins/group/flannel/ControllerPod 6.01
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
317 TestNetworkPlugins/group/flannel/NetCatPod 8.17
318 TestNetworkPlugins/group/flannel/DNS 0.14
319 TestNetworkPlugins/group/flannel/Localhost 0.1
320 TestNetworkPlugins/group/flannel/HairPin 0.09
321 TestNetworkPlugins/group/calico/Start 53.82
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
323 TestNetworkPlugins/group/bridge/NetCatPod 10.21
324 TestNetworkPlugins/group/bridge/DNS 0.13
325 TestNetworkPlugins/group/bridge/Localhost 0.1
326 TestNetworkPlugins/group/bridge/HairPin 0.11
327 TestNetworkPlugins/group/custom-flannel/Start 60.07
329 TestStartStop/group/old-k8s-version/serial/FirstStart 54.38
331 TestStartStop/group/no-preload/serial/FirstStart 58.02
332 TestNetworkPlugins/group/calico/ControllerPod 6.01
333 TestNetworkPlugins/group/calico/KubeletFlags 0.49
334 TestNetworkPlugins/group/calico/NetCatPod 8.24
335 TestNetworkPlugins/group/calico/DNS 0.15
336 TestNetworkPlugins/group/calico/Localhost 0.11
337 TestNetworkPlugins/group/calico/HairPin 0.11
338 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
339 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
340 TestNetworkPlugins/group/custom-flannel/DNS 0.16
341 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
342 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
343 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
345 TestStartStop/group/embed-certs/serial/FirstStart 40.58
347 TestStartStop/group/old-k8s-version/serial/Stop 16.31
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.03
350 TestStartStop/group/no-preload/serial/DeployApp 9.51
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
352 TestStartStop/group/old-k8s-version/serial/SecondStart 49.11
354 TestStartStop/group/no-preload/serial/Stop 18.31
355 TestStartStop/group/embed-certs/serial/DeployApp 9.22
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
358 TestStartStop/group/no-preload/serial/SecondStart 46.07
359 TestStartStop/group/embed-certs/serial/Stop 17.04
360 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.54
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
364 TestStartStop/group/embed-certs/serial/SecondStart 48.33
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.89
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
372 TestStartStop/group/newest-cni/serial/FirstStart 29.68
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
379 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/Stop 8.04
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/newest-cni/serial/SecondStart 10.33
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
TestDownloadOnly/v1.28.0/json-events (12.35s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-460922 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-460922 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.348853506s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.35s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1120 20:29:47.087807  254094 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1120 20:29:47.087907  254094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
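
The preload-exists check is just a lookup of the cached tarball logged above. A minimal sketch of verifying the same thing by hand, where the hypothetical $MINIKUBE_HOME stands in for the /home/jenkins/minikube-integration/21923-250580/.minikube directory used by this job:

	# The preloaded image tarball the test looks for
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/" | grep 'v1.28.0-cri-o-overlay-amd64'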

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-460922
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-460922: exit status 85 (75.357837ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-460922 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-460922 │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:29:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:29:34.793791  254106 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:29:34.794093  254106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:29:34.794104  254106 out.go:374] Setting ErrFile to fd 2...
	I1120 20:29:34.794111  254106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:29:34.794332  254106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	W1120 20:29:34.794491  254106 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21923-250580/.minikube/config/config.json: open /home/jenkins/minikube-integration/21923-250580/.minikube/config/config.json: no such file or directory
	I1120 20:29:34.795049  254106 out.go:368] Setting JSON to true
	I1120 20:29:34.795941  254106 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11517,"bootTime":1763659058,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:29:34.796038  254106 start.go:143] virtualization: kvm guest
	I1120 20:29:34.798091  254106 out.go:99] [download-only-460922] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1120 20:29:34.798247  254106 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball: no such file or directory
	I1120 20:29:34.798314  254106 notify.go:221] Checking for updates...
	I1120 20:29:34.799537  254106 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:29:34.800850  254106 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:29:34.802074  254106 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:29:34.803339  254106 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:29:34.804568  254106 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1120 20:29:34.806623  254106 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:29:34.806893  254106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:29:34.831077  254106 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:29:34.831199  254106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:29:34.894248  254106 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-20 20:29:34.883627151 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:29:34.894364  254106 docker.go:319] overlay module found
	I1120 20:29:34.896055  254106 out.go:99] Using the docker driver based on user configuration
	I1120 20:29:34.896085  254106 start.go:309] selected driver: docker
	I1120 20:29:34.896094  254106 start.go:930] validating driver "docker" against <nil>
	I1120 20:29:34.896225  254106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:29:34.956419  254106 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-20 20:29:34.94688619 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:29:34.956654  254106 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:29:34.957438  254106 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1120 20:29:34.957681  254106 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:29:34.959423  254106 out.go:171] Using Docker driver with root privileges
	I1120 20:29:34.961182  254106 cni.go:84] Creating CNI manager for ""
	I1120 20:29:34.961287  254106 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:29:34.961301  254106 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:29:34.961416  254106 start.go:353] cluster config:
	{Name:download-only-460922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-460922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:29:34.963182  254106 out.go:99] Starting "download-only-460922" primary control-plane node in "download-only-460922" cluster
	I1120 20:29:34.963209  254106 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:29:34.964503  254106 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:29:34.964549  254106 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 20:29:34.964640  254106 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:29:34.983578  254106 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:29:34.983781  254106 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:29:34.983883  254106 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:29:35.439777  254106 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1120 20:29:35.439817  254106 cache.go:65] Caching tarball of preloaded images
	I1120 20:29:35.440016  254106 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1120 20:29:35.442071  254106 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1120 20:29:35.442101  254106 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1120 20:29:35.540747  254106 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1120 20:29:35.540866  254106 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1120 20:29:43.457826  254106 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	
	
	* The control-plane node download-only-460922 host does not exist
	  To start a cluster, run: "minikube start -p download-only-460922"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
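Exit status 85 is the outcome the test wants here: the profile only ever downloaded artifacts, so there is no control-plane host for `minikube logs` to query. Reproducing the check by hand looks roughly like this (profile name from this run):

    out/minikube-linux-amd64 logs -p download-only-460922
    echo $?    # 85 while the cluster has never been started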
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-460922
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (12.95s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-839800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-839800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.951760706s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.95s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1120 20:30:00.497047  254094 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1120 20:30:00.497088  254094 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-839800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-839800: exit status 85 (77.715525ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-460922 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-460922 │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:29 UTC │
	│ delete  │ -p download-only-460922                                                                                                                                                   │ download-only-460922 │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │ 20 Nov 25 20:29 UTC │
	│ start   │ -o=json --download-only -p download-only-839800 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-839800 │ jenkins │ v1.37.0 │ 20 Nov 25 20:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/20 20:29:47
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1120 20:29:47.597157  254495 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:29:47.597432  254495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:29:47.597443  254495 out.go:374] Setting ErrFile to fd 2...
	I1120 20:29:47.597448  254495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:29:47.597655  254495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:29:47.598105  254495 out.go:368] Setting JSON to true
	I1120 20:29:47.598959  254495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":11530,"bootTime":1763659058,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:29:47.599056  254495 start.go:143] virtualization: kvm guest
	I1120 20:29:47.600926  254495 out.go:99] [download-only-839800] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:29:47.601122  254495 notify.go:221] Checking for updates...
	I1120 20:29:47.602371  254495 out.go:171] MINIKUBE_LOCATION=21923
	I1120 20:29:47.603711  254495 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:29:47.605036  254495 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:29:47.606193  254495 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:29:47.607343  254495 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1120 20:29:47.609736  254495 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1120 20:29:47.610004  254495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:29:47.633915  254495 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:29:47.634060  254495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:29:47.693868  254495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:29:47.682908868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:29:47.693985  254495 docker.go:319] overlay module found
	I1120 20:29:47.695777  254495 out.go:99] Using the docker driver based on user configuration
	I1120 20:29:47.695814  254495 start.go:309] selected driver: docker
	I1120 20:29:47.695839  254495 start.go:930] validating driver "docker" against <nil>
	I1120 20:29:47.695963  254495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:29:47.755157  254495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-20 20:29:47.745526787 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:29:47.755359  254495 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1120 20:29:47.755838  254495 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1120 20:29:47.756013  254495 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1120 20:29:47.757632  254495 out.go:171] Using Docker driver with root privileges
	I1120 20:29:47.758686  254495 cni.go:84] Creating CNI manager for ""
	I1120 20:29:47.758754  254495 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1120 20:29:47.758769  254495 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1120 20:29:47.758832  254495 start.go:353] cluster config:
	{Name:download-only-839800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-839800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:29:47.761407  254495 out.go:99] Starting "download-only-839800" primary control-plane node in "download-only-839800" cluster
	I1120 20:29:47.761435  254495 cache.go:134] Beginning downloading kic base image for docker with crio
	I1120 20:29:47.762627  254495 out.go:99] Pulling base image v0.0.48-1763507788-21924 ...
	I1120 20:29:47.762665  254495 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:29:47.762698  254495 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1120 20:29:47.783046  254495 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1120 20:29:47.783170  254495 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1120 20:29:47.783191  254495 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1120 20:29:47.783196  254495 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1120 20:29:47.783208  254495 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1120 20:29:48.608952  254495 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1120 20:29:48.609004  254495 cache.go:65] Caching tarball of preloaded images
	I1120 20:29:48.609198  254495 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1120 20:29:48.611016  254495 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1120 20:29:48.611040  254495 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1120 20:29:48.710311  254495 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1120 20:29:48.710372  254495 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21923-250580/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-839800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-839800"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-839800
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.43s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-822958 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-822958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-822958
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (0.89s)
=== RUN   TestBinaryMirror
I1120 20:30:01.691880  254094 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-393068 --alsologtostderr --binary-mirror http://127.0.0.1:39031 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-393068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-393068
--- PASS: TestBinaryMirror (0.89s)
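This test exercises --binary-mirror, which redirects the kubectl/kubelet/kubeadm downloads to a caller-supplied HTTP endpoint (the test spins up a local server first, which is why kubectl was fetched straight from dl.k8s.io above). Driving the flag by hand would look something like this sketch, with the URL and port purely illustrative:

    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:39031 \
      --driver=docker --container-runtime=crio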
TestOffline (52.42s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-735987 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-735987 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (48.901153301s)
helpers_test.go:175: Cleaning up "offline-crio-735987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-735987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-735987: (3.516864221s)
--- PASS: TestOffline (52.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-658933
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-658933: exit status 85 (64.94367ms)

-- stdout --
	* Profile "addons-658933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-658933"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-658933
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-658933: exit status 85 (65.91288ms)

-- stdout --
	* Profile "addons-658933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-658933"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (103.5s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-658933 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-658933 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m43.499277204s)
--- PASS: TestAddons/Setup (103.50s)
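Fifteen addons in one start is a lot to track; a quick way to confirm what actually came up after such a run is to list addon state and scan the pods (sketch, using this run's profile and context):

    out/minikube-linux-amd64 -p addons-658933 addons list
    kubectl --context addons-658933 get pods -A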
TestAddons/serial/GCPAuth/Namespaces (0.16s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-658933 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-658933 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (9.44s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-658933 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-658933 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [28a23eb3-1008-4a65-b8d8-fe27b2d8b7e4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00420358s
addons_test.go:694: (dbg) Run:  kubectl --context addons-658933 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-658933 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-658933 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.44s)
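The assertions above reduce to checking what the gcp-auth webhook injected into the pod's environment; the same checks can be rerun by hand with the commands the test itself uses:

    kubectl --context addons-658933 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-658933 exec busybox -- printenv GOOGLE_CLOUD_PROJECT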
TestAddons/StoppedEnableDisable (18.56s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-658933
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-658933: (18.264345682s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-658933
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-658933
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-658933
--- PASS: TestAddons/StoppedEnableDisable (18.56s)

TestCertOptions (33.74s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-866173 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-866173 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.114335884s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-866173 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-866173 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-866173 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-866173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-866173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-866173: (3.821533504s)
--- PASS: TestCertOptions (33.74s)
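The openssl step is the real assertion: the extra --apiserver-ips and --apiserver-names values must show up as SANs in the apiserver serving certificate, and --apiserver-port in the kubeconfig. To eyeball the SANs directly (sketch; the grep is illustrative):

    out/minikube-linux-amd64 -p cert-options-866173 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'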
TestCertExpiration (216.68s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-118194 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-118194 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.988749948s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-118194 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-118194 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.989242658s)
helpers_test.go:175: Cleaning up "cert-expiration-118194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-118194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-118194: (4.703631131s)
--- PASS: TestCertExpiration (216.68s)
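Most of this test's 216s appears to be an intentional wait: the cluster first starts with --cert-expiration=3m, the test lets those certificates lapse, and the second start with a long window must regenerate them (note it completes in about 6s rather than failing). The same knob by hand (sketch; profile name is arbitrary):

    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
    # wait out the 3m window, then restart with a longer expiry to force regeneration
    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio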
TestForceSystemdFlag (30.92s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-687992 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-687992 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.742868826s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-687992 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-687992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-687992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-687992: (2.883079417s)
--- PASS: TestForceSystemdFlag (30.92s)
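The ssh step carries the assertion: with --force-systemd, CRI-O's generated drop-in should select the systemd cgroup manager. A manual spot-check along the same lines (sketch; the grep assumes the standard cgroup_manager key in the drop-in):

    out/minikube-linux-amd64 -p force-systemd-flag-687992 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager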
TestForceSystemdEnv (29.41s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-267271 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-267271 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.820890458s)
helpers_test.go:175: Cleaning up "force-systemd-env-267271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-267271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-267271: (2.592715591s)
--- PASS: TestForceSystemdEnv (29.41s)

TestErrorSpam/setup (23.83s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-200442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-200442 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-200442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-200442 --driver=docker  --container-runtime=crio: (23.833912944s)
--- PASS: TestErrorSpam/setup (23.83s)

TestErrorSpam/start (0.7s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.01s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (6.88s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause: exit status 80 (2.35233361s)

-- stdout --
	* Pausing node nospam-200442 ...

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause: exit status 80 (2.313314382s)

-- stdout --
	* Pausing node nospam-200442 ...

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause: exit status 80 (2.21790371s)

-- stdout --
	* Pausing node nospam-200442 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.88s)
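
Note: every pause failure above (and the unpause failures in the next section) shares one root cause: minikube's pause path shells into the node and runs `sudo runc list -f json`, and on this CRI-O node the default runc state directory /run/runc does not exist. A minimal sketch for reproducing the failing check by hand, assuming the nospam-200442 profile is still running:

	# run the exact command minikube's pause path runs inside the node
	out/minikube-linux-amd64 -p nospam-200442 ssh "sudo runc list -f json"
	# expected on this run: exit 1 with 'open /run/runc: no such file or directory'
	out/minikube-linux-amd64 -p nospam-200442 ssh "ls -ld /run/runc"
	# confirms whether the runc state directory exists at all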

                                                
                                    
x
+
TestErrorSpam/unpause (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause: exit status 80 (1.928390203s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-200442 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause: exit status 80 (1.717358792s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-200442 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause: exit status 80 (1.731693301s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-200442 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-20T20:35:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.38s)

                                                
                                    
x
+
TestErrorSpam/stop (12.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 stop: (12.466490349s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200442 --log_dir /tmp/nospam-200442 stop
--- PASS: TestErrorSpam/stop (12.68s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21923-250580/.minikube/files/etc/test/nested/copy/254094/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
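
Note: the path above exercises minikube's file-sync convention: anything placed under $MINIKUBE_HOME/files/<path> on the host is copied to /<path> inside the node at cluster start. A minimal sketch, assuming the default ~/.minikube home (the 254094 path component is just this run's test-specific directory, and the file content is hypothetical):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/254094
	echo "synced-from-host" > ~/.minikube/files/etc/test/nested/copy/254094/hosts  # hypothetical content
	out/minikube-linux-amd64 start -p functional-041399
	out/minikube-linux-amd64 -p functional-041399 ssh "cat /etc/test/nested/copy/254094/hosts"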

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (42.19s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-041399 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.192946054s)
--- PASS: TestFunctional/serial/StartWithProxy (42.19s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.22s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1120 20:36:37.602840  254094 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-041399 --alsologtostderr -v=8: (6.222338628s)
functional_test.go:678: soft start took 6.223158444s for "functional-041399" cluster.
I1120 20:36:43.825622  254094 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.22s)
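
Note: the 6.2s figure is the point of this test: running `start` against a profile that is already up reuses the existing node and only reconciles state, versus the ~42s cold start recorded in StartWithProxy above. A minimal sketch of the two paths:

	# cold start: provisions the docker container and bootstraps Kubernetes (~42s here)
	out/minikube-linux-amd64 start -p functional-041399 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio
	# soft start: same profile, node already running (~6s here)
	out/minikube-linux-amd64 start -p functional-041399 --alsologtostderr -v=8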

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-041399 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:3.1: (1.407472873s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:3.3
E1120 20:36:46.756299  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:46.762789  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:46.774204  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:46.795752  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:46.837181  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:3.3: (1.48982192s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:latest
E1120 20:36:46.919024  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:47.080372  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:47.402152  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:36:48.044208  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:latest: (1.391773733s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)
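
Note: `cache add` pulls an image on the host, stores it under the minikube image cache, and loads it into the node's container runtime, which is why the verify_cache_inside_node check below can list it with crictl. A minimal sketch of the round trip, assuming the profile is up:

	out/minikube-linux-amd64 -p functional-041399 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 cache list                                    # host-side view of the cache
	out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl images   # image now visible inside the node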

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-041399 /tmp/TestFunctionalserialCacheCmdcacheadd_local903276780/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache add minikube-local-cache-test:functional-041399
E1120 20:36:49.325791  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 cache add minikube-local-cache-test:functional-041399: (1.960009595s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache delete minikube-local-cache-test:functional-041399
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-041399
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.225728ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cache reload
E1120 20:36:51.888016  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 cache reload: (1.214024314s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
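
Note: this section is the recovery flow for images removed inside the node: `cache reload` re-pushes everything held in the host-side cache. A minimal sketch, assuming pause:latest was cached by the earlier add_remote step:

	out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	out/minikube-linux-amd64 -p functional-041399 cache reload
	out/minikube-linux-amd64 -p functional-041399 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again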

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 kubectl -- --context functional-041399 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-041399 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1120 20:36:57.009788  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:37:07.251970  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:37:27.733559  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-041399 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.389644885s)
functional_test.go:776: restart took 39.389767892s for "functional-041399" cluster.
I1120 20:37:32.916329  254094 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (39.39s)
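
Note: `--extra-config` is minikube's pass-through for component flags, in the form <component>.<key>=<value>; here it enables the NamespaceAutoProvision admission plugin on the apiserver, which forces the ~39s restart timed above. The option is persisted in the profile and shows up as ExtraOptions in the config dumps later in this report. A minimal sketch:

	out/minikube-linux-amd64 start -p functional-041399 \
		--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all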

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-041399 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 logs: (1.237590029s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 logs --file /tmp/TestFunctionalserialLogsFileCmd3102842073/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 logs --file /tmp/TestFunctionalserialLogsFileCmd3102842073/001/logs.txt: (1.24470653s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.19s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-041399 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-041399
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-041399: exit status 115 (361.15994ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32125 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-041399 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)
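
Note: exit status 115 here is minikube's SVC_UNREACHABLE case: the Service exists and has a NodePort (hence the URL table in stdout), but no running pod backs it, so `minikube service` refuses to connect. A minimal sketch of the same probe, assuming a service whose selector matches no pods:

	kubectl --context functional-041399 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-041399; echo $?   # 115
	kubectl --context functional-041399 delete -f testdata/invalidsvc.yaml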

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 config get cpus: exit status 14 (100.296388ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 config get cpus: exit status 14 (81.319661ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
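
Note: both non-zero exits above assert the same behavior: `config get` on an unset key exits 14 with "specified key could not be found in config", while set/get/unset round-trip the value in between. A minimal sketch:

	out/minikube-linux-amd64 -p functional-041399 config set cpus 2
	out/minikube-linux-amd64 -p functional-041399 config get cpus               # prints 2
	out/minikube-linux-amd64 -p functional-041399 config unset cpus
	out/minikube-linux-amd64 -p functional-041399 config get cpus; echo $?      # exit 14: key unset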

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-041399 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-041399 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 293339: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.27s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.734461ms)

                                                
                                                
-- stdout --
	* [functional-041399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:38:11.443434  292842 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:38:11.443699  292842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.443710  292842 out.go:374] Setting ErrFile to fd 2...
	I1120 20:38:11.443714  292842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.443947  292842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:38:11.444390  292842 out.go:368] Setting JSON to false
	I1120 20:38:11.445365  292842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12033,"bootTime":1763659058,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:38:11.445461  292842 start.go:143] virtualization: kvm guest
	I1120 20:38:11.447622  292842 out.go:179] * [functional-041399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 20:38:11.448940  292842 notify.go:221] Checking for updates...
	I1120 20:38:11.448995  292842 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:38:11.450086  292842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:38:11.451317  292842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:38:11.452601  292842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:38:11.453719  292842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:38:11.454819  292842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:38:11.456149  292842 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:38:11.456649  292842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:38:11.481062  292842 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:38:11.481229  292842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:38:11.541832  292842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-20 20:38:11.530609482 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:38:11.541978  292842 docker.go:319] overlay module found
	I1120 20:38:11.547353  292842 out.go:179] * Using the docker driver based on existing profile
	I1120 20:38:11.548463  292842 start.go:309] selected driver: docker
	I1120 20:38:11.548495  292842 start.go:930] validating driver "docker" against &{Name:functional-041399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-041399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:38:11.548622  292842 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:38:11.550337  292842 out.go:203] 
	W1120 20:38:11.551568  292842 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1120 20:38:11.552608  292842 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
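
Note: `--dry-run` validates the requested flags against the existing profile without touching the node, so the 250MB request trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run with no resource changes passes. A minimal sketch:

	out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?  # 23
	out/minikube-linux-amd64 start -p functional-041399 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio; echo $?  # 0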

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.900745ms)

                                                
                                                
-- stdout --
	* [functional-041399] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 20:38:11.855961  293055 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:38:11.856241  293055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.856252  293055 out.go:374] Setting ErrFile to fd 2...
	I1120 20:38:11.856256  293055 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:38:11.856576  293055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:38:11.857035  293055 out.go:368] Setting JSON to false
	I1120 20:38:11.857961  293055 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":12034,"bootTime":1763659058,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 20:38:11.858065  293055 start.go:143] virtualization: kvm guest
	I1120 20:38:11.859952  293055 out.go:179] * [functional-041399] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1120 20:38:11.861323  293055 notify.go:221] Checking for updates...
	I1120 20:38:11.861347  293055 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 20:38:11.862618  293055 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 20:38:11.864251  293055 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 20:38:11.865525  293055 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 20:38:11.866889  293055 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 20:38:11.868252  293055 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 20:38:11.869944  293055 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:38:11.870490  293055 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 20:38:11.895674  293055 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 20:38:11.895851  293055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:38:11.958996  293055 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-20 20:38:11.94832152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:38:11.959097  293055 docker.go:319] overlay module found
	I1120 20:38:11.965194  293055 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1120 20:38:11.966444  293055 start.go:309] selected driver: docker
	I1120 20:38:11.966459  293055 start.go:930] validating driver "docker" against &{Name:functional-041399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-041399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1120 20:38:11.966536  293055 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 20:38:11.968144  293055 out.go:203] 
	W1120 20:38:11.969275  293055 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1120 20:38:11.970401  293055 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
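
Note: the French stderr above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation 250MiB is less than the usable minimum of 1800MB") is the localized form of the same DryRun failure. The test presumably injects the locale through the environment; a minimal sketch, assuming minikube honors the standard LC_ALL/LANG variables:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-041399 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# expected: the French RSRC_INSUFFICIENT_REQ_MEMORY message, exit 23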

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
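
Note: `status -f` takes a Go template over the status struct; the "kublet:" spelling above is a literal label in the test's own format string, and only the {{.Kubelet}} field name matters to minikube. A minimal sketch of the three output modes:

	out/minikube-linux-amd64 -p functional-041399 status            # human-readable
	out/minikube-linux-amd64 -p functional-041399 status -o json    # machine-readable
	out/minikube-linux-amd64 -p functional-041399 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'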

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [dff1ab94-19ca-46a8-bf72-0707cffe0884] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004040177s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-041399 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-041399 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-041399 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-041399 apply -f testdata/storage-provisioner/pod.yaml
I1120 20:37:56.275282  254094 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6bc76a8f-29ff-4a98-8db6-e46c702ef818] Pending
helpers_test.go:352: "sp-pod" [6bc76a8f-29ff-4a98-8db6-e46c702ef818] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6bc76a8f-29ff-4a98-8db6-e46c702ef818] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004937921s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-041399 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-041399 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-041399 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7a5aa489-4eb2-4dcd-94c5-7b5449b8decf] Pending
helpers_test.go:352: "sp-pod" [7a5aa489-4eb2-4dcd-94c5-7b5449b8decf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7a5aa489-4eb2-4dcd-94c5-7b5449b8decf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003711433s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-041399 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
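
Note: the sequence above is a persistence check: write a file through the first pod, delete the pod, recreate it against the same PersistentVolumeClaim, and confirm the file survived the pod's lifetime. A minimal sketch of the same flow using the repo's testdata manifests:

	kubectl --context functional-041399 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-041399 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-041399 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-041399 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-041399 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
	kubectl --context functional-041399 exec sp-pod -- ls /tmp/mount                     # foo persists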

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh -n functional-041399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cp functional-041399:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2392736933/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh -n functional-041399 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh -n functional-041399 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)
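Note on the cp syntax exercised here: a bare path is host-side, while a path prefixed with a node name (here the single node functional-041399) refers to a path inside the VM, so one subcommand covers both directions. Sketch:

    # host -> node
    out/minikube-linux-amd64 -p functional-041399 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host (note the node-name prefix on the source)
    out/minikube-linux-amd64 -p functional-041399 cp functional-041399:/home/docker/cp-test.txt /tmp/cp-test.txt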

                                                
                                    
TestFunctional/parallel/MySQL (16.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-041399 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-r5gnk" [03938f9b-dc3b-4720-b756-493e61f492c3] Pending
helpers_test.go:352: "mysql-5bb876957f-r5gnk" [03938f9b-dc3b-4720-b756-493e61f492c3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-r5gnk" [03938f9b-dc3b-4720-b756-493e61f492c3] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.005107301s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-041399 exec mysql-5bb876957f-r5gnk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-041399 exec mysql-5bb876957f-r5gnk -- mysql -ppassword -e "show databases;": exit status 1 (99.068076ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1120 20:37:55.682518  254094 retry.go:31] will retry after 871.235539ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-041399 exec mysql-5bb876957f-r5gnk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.28s)
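The first exec failed with ERROR 2002 because the pod was Running but mysqld had not yet created its listening socket; the harness retried once (~871ms) and succeeded. A shell sketch of the same retry, using the pod name from this run:

    until kubectl --context functional-041399 exec mysql-5bb876957f-r5gnk -- \
        mysql -ppassword -e "show databases;"; do
      sleep 1    # mysqld can lag the pod's Running state while it initializes
    done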

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/254094/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /etc/test/nested/copy/254094/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/254094.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /etc/ssl/certs/254094.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/254094.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /usr/share/ca-certificates/254094.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2540942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /etc/ssl/certs/2540942.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2540942.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /usr/share/ca-certificates/2540942.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)
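The 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash aliases for the two synced PEMs, which is why the test checks both spellings. Assuming 51391683.0 corresponds to 254094.pem, the hash can be recomputed on the node:

    out/minikube-linux-amd64 -p functional-041399 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/254094.pem"    # expect 51391683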

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-041399 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
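The go-template above prints only the label keys of the first node; a jsonpath one-liner yields the same map with values (sketch):

    kubectl --context functional-041399 get nodes -o jsonpath='{.items[0].metadata.labels}'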

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active docker": exit status 1 (356.701458ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active containerd": exit status 1 (352.226329ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
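The non-zero exits here are the expected result: systemctl is-active exits 0 only for an active unit, and the status 3 surfaced through ssh corresponds to "inactive", confirming docker and containerd are disabled on this crio cluster. Checking by hand:

    out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active docker"    # prints "inactive"
    out/minikube-linux-amd64 -p functional-041399 ssh "sudo systemctl is-active crio"      # should print "active" on this cluster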

                                                
                                    
TestFunctional/parallel/License (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.91s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-041399 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-041399 image ls --format short --alsologtostderr:
I1120 20:38:17.222819  293902 out.go:360] Setting OutFile to fd 1 ...
I1120 20:38:17.222953  293902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:17.222963  293902 out.go:374] Setting ErrFile to fd 2...
I1120 20:38:17.222970  293902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:17.223308  293902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
I1120 20:38:17.224120  293902 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:17.224275  293902 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:17.224811  293902 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
I1120 20:38:17.248084  293902 ssh_runner.go:195] Run: systemctl --version
I1120 20:38:17.248137  293902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
I1120 20:38:17.272713  293902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
I1120 20:38:17.378204  293902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
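As the Stderr trace shows, image ls is a thin wrapper over crictl inside the node; the short format is essentially the flattened repoTags. An equivalent one-liner, assuming jq is available on the host (an assumption; it is not part of the test):

    out/minikube-linux-amd64 -p functional-041399 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]?'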

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-041399 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-041399 image ls --format table --alsologtostderr:
I1120 20:38:20.532485  294303 out.go:360] Setting OutFile to fd 1 ...
I1120 20:38:20.532603  294303 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:20.532609  294303 out.go:374] Setting ErrFile to fd 2...
I1120 20:38:20.532613  294303 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:20.532819  294303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
I1120 20:38:20.533417  294303 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:20.533519  294303 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:20.533895  294303 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
I1120 20:38:20.553387  294303 ssh_runner.go:195] Run: systemctl --version
I1120 20:38:20.553441  294303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
I1120 20:38:20.572050  294303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
I1120 20:38:20.668544  294303 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-041399 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-041399 image ls --format json --alsologtostderr:
I1120 20:38:20.303085  294250 out.go:360] Setting OutFile to fd 1 ...
I1120 20:38:20.303198  294250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:20.303206  294250 out.go:374] Setting ErrFile to fd 2...
I1120 20:38:20.303210  294250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:20.303390  294250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
I1120 20:38:20.304029  294250 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:20.304120  294250 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:20.304555  294250 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
I1120 20:38:20.323979  294250 ssh_runner.go:195] Run: systemctl --version
I1120 20:38:20.324026  294250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
I1120 20:38:20.341953  294250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
I1120 20:38:20.437398  294250 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 image ls --format yaml --alsologtostderr: (1.523886554s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-041399 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-041399 image ls --format yaml --alsologtostderr:
I1120 20:38:17.498377  293956 out.go:360] Setting OutFile to fd 1 ...
I1120 20:38:17.498701  293956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:17.498717  293956 out.go:374] Setting ErrFile to fd 2...
I1120 20:38:17.498723  293956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:17.499024  293956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
I1120 20:38:17.499799  293956 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:17.499944  293956 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:17.500567  293956 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
I1120 20:38:17.523463  293956 ssh_runner.go:195] Run: systemctl --version
I1120 20:38:17.523529  293956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
I1120 20:38:17.545813  293956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
I1120 20:38:17.646121  293956 ssh_runner.go:195] Run: sudo crictl images --output json
I1120 20:38:18.942423  293956 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.296263006s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh pgrep buildkitd: exit status 1 (295.295238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image build -t localhost/my-image:functional-041399 testdata/build --alsologtostderr
2025/11/20 20:38:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 image build -t localhost/my-image:functional-041399 testdata/build --alsologtostderr: (2.893328991s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-041399 image build -t localhost/my-image:functional-041399 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 573735c144d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-041399
--> 095b70afd68
Successfully tagged localhost/my-image:functional-041399
095b70afd6897598fa49a26912822e08a3db31b0fdaf011670bad1a44338b637
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-041399 image build -t localhost/my-image:functional-041399 testdata/build --alsologtostderr:
I1120 20:38:19.302719  294178 out.go:360] Setting OutFile to fd 1 ...
I1120 20:38:19.302977  294178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:19.302985  294178 out.go:374] Setting ErrFile to fd 2...
I1120 20:38:19.302989  294178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1120 20:38:19.303183  294178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
I1120 20:38:19.303831  294178 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:19.304515  294178 config.go:182] Loaded profile config "functional-041399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1120 20:38:19.304935  294178 cli_runner.go:164] Run: docker container inspect functional-041399 --format={{.State.Status}}
I1120 20:38:19.323751  294178 ssh_runner.go:195] Run: systemctl --version
I1120 20:38:19.323797  294178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-041399
I1120 20:38:19.341111  294178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/functional-041399/id_rsa Username:docker}
I1120 20:38:19.436833  294178 build_images.go:162] Building image from path: /tmp/build.2576689714.tar
I1120 20:38:19.436908  294178 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1120 20:38:19.444908  294178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2576689714.tar
I1120 20:38:19.448636  294178 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2576689714.tar: stat -c "%s %y" /var/lib/minikube/build/build.2576689714.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2576689714.tar': No such file or directory
I1120 20:38:19.448672  294178 ssh_runner.go:362] scp /tmp/build.2576689714.tar --> /var/lib/minikube/build/build.2576689714.tar (3072 bytes)
I1120 20:38:19.466902  294178 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2576689714
I1120 20:38:19.474833  294178 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2576689714 -xf /var/lib/minikube/build/build.2576689714.tar
I1120 20:38:19.483027  294178 crio.go:315] Building image: /var/lib/minikube/build/build.2576689714
I1120 20:38:19.483098  294178 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-041399 /var/lib/minikube/build/build.2576689714 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1120 20:38:22.115876  294178 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-041399 /var/lib/minikube/build/build.2576689714 --cgroup-manager=cgroupfs: (2.632749578s)
I1120 20:38:22.115932  294178 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2576689714
I1120 20:38:22.124415  294178 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2576689714.tar
I1120 20:38:22.132254  294178 build_images.go:218] Built localhost/my-image:functional-041399 from /tmp/build.2576689714.tar
I1120 20:38:22.132290  294178 build_images.go:134] succeeded building to: functional-041399
I1120 20:38:22.132297  294178 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls
E1120 20:39:30.616483  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:41:46.755439  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:42:14.458379  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 20:46:46.755057  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)
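The STEP lines above pin down the build context, so the run can be reproduced outside the harness; note from the trace that on crio the context is shipped to the node as a tar and built there with podman. A sketch (the content.txt payload is arbitrary; the real testdata file is not shown in the log):

    mkdir build && cd build
    echo "test" > content.txt              # placeholder payload
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-041399 image build -t localhost/my-image:functional-041399 .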

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.763303026s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-041399
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 287893: os: process already finished
helpers_test.go:525: unable to kill pid 287678: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-041399 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6c00edeb-bc99-494b-b996-042df937ac47] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6c00edeb-bc99-494b-b996-042df937ac47] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003778489s
I1120 20:37:57.066139  254094 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image rm kicbase/echo-server:functional-041399 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-041399 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.61.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
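With the tunnel from StartTunnel still running, the service's LoadBalancer ingress IP is routable from the host, which is what AccessDirect asserts. A manual sketch using the jsonpath from the IngressIP step:

    IP=$(kubectl --context functional-041399 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -fsS "http://$IP" >/dev/null && echo "tunnel OK"    # nginx answering confirms the route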

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-041399 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "434.420099ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "83.077752ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "420.135713ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "79.344993ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdany-port3803868370/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763671079891247450" to /tmp/TestFunctionalparallelMountCmdany-port3803868370/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763671079891247450" to /tmp/TestFunctionalparallelMountCmdany-port3803868370/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763671079891247450" to /tmp/TestFunctionalparallelMountCmdany-port3803868370/001/test-1763671079891247450
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.173724ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1120 20:38:00.219733  254094 retry.go:31] will retry after 343.756941ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 20 20:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 20 20:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 20 20:37 test-1763671079891247450
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh cat /mount-9p/test-1763671079891247450
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-041399 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1df2688d-65c7-48b7-a832-ad6963b4fd35] Pending
helpers_test.go:352: "busybox-mount" [1df2688d-65c7-48b7-a832-ad6963b4fd35] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1df2688d-65c7-48b7-a832-ad6963b4fd35] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1df2688d-65c7-48b7-a832-ad6963b4fd35] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003173473s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-041399 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdany-port3803868370/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.68s)
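
Note: the first findmnt probe above fails and is retried after ~344ms (retry.go:31) because the mount daemon publishes the 9p mount asynchronously; the second probe succeeds. A minimal Go sketch of that poll loop, assuming only a minikube binary on PATH (profile and mount path are taken from this run; the retry budget is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitFor9p polls the guest until findmnt reports a 9p filesystem at path.
    func waitFor9p(profile, path string, attempts int, backoff time.Duration) error {
    	for i := 0; i < attempts; i++ {
    		probe := fmt.Sprintf("findmnt -T %s | grep 9p", path)
    		if exec.Command("minikube", "-p", profile, "ssh", probe).Run() == nil {
    			return nil // mount is visible in the guest
    		}
    		time.Sleep(backoff) // the real harness adds jitter on each retry
    	}
    	return fmt.Errorf("9p mount at %s not visible after %d attempts", path, attempts)
    }

    func main() {
    	fmt.Println(waitFor9p("functional-041399", "/mount-9p", 5, 400*time.Millisecond))
    }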

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdspecific-port511597593/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.644962ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1120 20:38:07.872670  254094 retry.go:31] will retry after 581.826572ms: exit status 1
I1120 20:38:07.887147  254094 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T /mount-9p | grep 9p"
E1120 20:38:08.694943  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdspecific-port511597593/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "sudo umount -f /mount-9p": exit status 1 (281.140724ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-041399 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdspecific-port511597593/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
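
Note: the exit-32 umount above is benign: stopping the mount daemon already unmounted /mount-9p, so the forced unmount in cleanup reports "not mounted" and the test still passes. A cleanup helper can treat that case as success; a sketch under the same assumptions as the previous block:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // forceUnmount force-unmounts path in the guest, treating the
    // "not mounted" case as success (cleanup already happened).
    func forceUnmount(profile, path string) error {
    	out, err := exec.Command("minikube", "-p", profile, "ssh",
    		"sudo umount -f "+path).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "not mounted") {
    		return nil
    	}
    	return err
    }

    func main() {
    	fmt.Println(forceUnmount("functional-041399", "/mount-9p"))
    }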

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T" /mount1: exit status 1 (353.997344ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1120 20:38:09.867959  254094 retry.go:31] will retry after 615.791235ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-041399 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-041399 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3793640285/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
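
Note: the "unable to find parent, assuming dead" helper lines above verify cleanup by checking that each mount daemon's process no longer exists after `mount --kill=true`. On Unix the conventional liveness probe is signal 0, which delivers nothing but performs the existence check; a sketch (the PIDs are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    // isAlive reports whether a process with the given PID still exists.
    // Signal 0 sends no signal but errors if the process is gone.
    // (An EPERM would also mean it exists; ignored here for brevity.)
    func isAlive(pid int) bool {
    	proc, err := os.FindProcess(pid) // never fails on Unix
    	if err != nil {
    		return false
    	}
    	return proc.Signal(syscall.Signal(0)) == nil
    }

    func main() {
    	fmt.Println(isAlive(os.Getpid())) // true: we are running
    	fmt.Println(isAlive(1 << 22))     // almost certainly false
    }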

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 service list: (1.702548772s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-041399 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-041399 service list -o json: (1.705355249s)
functional_test.go:1504: Took "1.705455056s" to run "out/minikube-linux-amd64 -p functional-041399 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-041399
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-041399
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-041399
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (114.08s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m53.324328014s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (114.08s)

TestMultiControlPlane/serial/DeployApp (5.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 kubectl -- rollout status deployment/busybox: (3.315595441s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-58ttm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-94vcx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-rsl29 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-58ttm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-94vcx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-rsl29 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-58ttm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-94vcx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-rsl29 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.31s)

TestMultiControlPlane/serial/PingHostFromPods (1.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-58ttm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-58ttm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-94vcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-94vcx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-rsl29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 kubectl -- exec busybox-7b57f96db7-rsl29 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
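
Note: the `nslookup ... | awk 'NR==5' | cut -d' ' -f3` pipeline above is position-based: in the older busybox nslookup layout, line 5 carries the answer record and its third space-separated field is the resolved IP, which the follow-up `ping -c 1 192.168.49.1` then targets. The same slicing in Go, with an illustrative sample of that layout (real output varies by resolver, which is exactly why the pipeline hardcodes positions):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
    // output, then its third space-delimited field.
    func hostIP(out string) (string, error) {
    	lines := strings.Split(out, "\n")
    	if len(lines) < 5 {
    		return "", fmt.Errorf("want >=5 lines, got %d", len(lines))
    	}
    	fields := strings.Split(lines[4], " ")
    	if len(fields) < 3 {
    		return "", fmt.Errorf("line 5 too short: %q", lines[4])
    	}
    	return fields[2], nil
    }

    func main() {
    	// Sample busybox-style output; layout is an assumption.
    	out := "Server:    10.96.0.10\n" +
    		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
    		"\n" +
    		"Name:      host.minikube.internal\n" +
    		"Address 1: 192.168.49.1\n"
    	fmt.Println(hostIP(out)) // 192.168.49.1 <nil>
    }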

TestMultiControlPlane/serial/AddWorkerNode (28.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 node add --alsologtostderr -v 5: (27.205891103s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.13s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-922218 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp testdata/cp-test.txt ha-922218:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218_ha-922218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test_ha-922218_ha-922218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218:/home/docker/cp-test.txt ha-922218-m03:/home/docker/cp-test_ha-922218_ha-922218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test_ha-922218_ha-922218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218:/home/docker/cp-test.txt ha-922218-m04:/home/docker/cp-test_ha-922218_ha-922218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test_ha-922218_ha-922218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp testdata/cp-test.txt ha-922218-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m02:/home/docker/cp-test.txt ha-922218:/home/docker/cp-test_ha-922218-m02_ha-922218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test_ha-922218-m02_ha-922218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m02:/home/docker/cp-test.txt ha-922218-m03:/home/docker/cp-test_ha-922218-m02_ha-922218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test_ha-922218-m02_ha-922218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m02:/home/docker/cp-test.txt ha-922218-m04:/home/docker/cp-test_ha-922218-m02_ha-922218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test_ha-922218-m02_ha-922218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp testdata/cp-test.txt ha-922218-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218:/home/docker/cp-test_ha-922218-m03_ha-922218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218-m03_ha-922218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m03:/home/docker/cp-test.txt ha-922218-m04:/home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test_ha-922218-m03_ha-922218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp testdata/cp-test.txt ha-922218-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1859247620/001/cp-test_ha-922218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218:/home/docker/cp-test_ha-922218-m04_ha-922218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218 "sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m02:/home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m02 "sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 cp ha-922218-m04:/home/docker/cp-test.txt ha-922218-m03:/home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 ssh -n ha-922218-m03 "sudo cat /home/docker/cp-test_ha-922218-m04_ha-922218-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.53s)
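
Note: the CopyFile sequence above is a full copy matrix over the four nodes: each node's /home/docker/cp-test.txt is copied back to the host and to every other node, and every destination is re-read with `ssh ... sudo cat` to confirm the content survived. A sketch that enumerates the same source/destination pairs (node names from this run):

    package main

    import "fmt"

    func main() {
    	nodes := []string{"ha-922218", "ha-922218-m02", "ha-922218-m03", "ha-922218-m04"}
    	for _, src := range nodes {
    		// one copy back to the host per source node...
    		fmt.Printf("cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", src, src)
    		// ...and one to every other node, each verified with `ssh sudo cat`.
    		for _, dst := range nodes {
    			if dst == src {
    				continue
    			}
    			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
    				src, dst, src, dst)
    		}
    	}
    }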

TestMultiControlPlane/serial/StopSecondaryNode (18.88s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 node stop m02 --alsologtostderr -v 5: (18.157192689s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5: exit status 7 (720.57711ms)

-- stdout --
	ha-922218
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-922218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-922218-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-922218-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1120 20:51:08.421308  318517 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:51:08.421597  318517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:51:08.421608  318517 out.go:374] Setting ErrFile to fd 2...
	I1120 20:51:08.421612  318517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:51:08.421870  318517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:51:08.422087  318517 out.go:368] Setting JSON to false
	I1120 20:51:08.422132  318517 mustload.go:66] Loading cluster: ha-922218
	I1120 20:51:08.422243  318517 notify.go:221] Checking for updates...
	I1120 20:51:08.422611  318517 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:51:08.422635  318517 status.go:174] checking status of ha-922218 ...
	I1120 20:51:08.423103  318517 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:51:08.442806  318517 status.go:371] ha-922218 host status = "Running" (err=<nil>)
	I1120 20:51:08.442856  318517 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:51:08.443211  318517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218
	I1120 20:51:08.464394  318517 host.go:66] Checking if "ha-922218" exists ...
	I1120 20:51:08.464760  318517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:51:08.464847  318517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218
	I1120 20:51:08.484152  318517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218/id_rsa Username:docker}
	I1120 20:51:08.580077  318517 ssh_runner.go:195] Run: systemctl --version
	I1120 20:51:08.586555  318517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:51:08.599538  318517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 20:51:08.663433  318517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-20 20:51:08.65278164 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 20:51:08.663960  318517 kubeconfig.go:125] found "ha-922218" server: "https://192.168.49.254:8443"
	I1120 20:51:08.663991  318517 api_server.go:166] Checking apiserver status ...
	I1120 20:51:08.664024  318517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:51:08.677878  318517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1285/cgroup
	W1120 20:51:08.686493  318517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1285/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:51:08.686561  318517 ssh_runner.go:195] Run: ls
	I1120 20:51:08.690697  318517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:51:08.694773  318517 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:51:08.694797  318517 status.go:463] ha-922218 apiserver status = Running (err=<nil>)
	I1120 20:51:08.694809  318517 status.go:176] ha-922218 status: &{Name:ha-922218 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:51:08.694829  318517 status.go:174] checking status of ha-922218-m02 ...
	I1120 20:51:08.695057  318517 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:51:08.713615  318517 status.go:371] ha-922218-m02 host status = "Stopped" (err=<nil>)
	I1120 20:51:08.713640  318517 status.go:384] host is not running, skipping remaining checks
	I1120 20:51:08.713647  318517 status.go:176] ha-922218-m02 status: &{Name:ha-922218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:51:08.713668  318517 status.go:174] checking status of ha-922218-m03 ...
	I1120 20:51:08.713898  318517 cli_runner.go:164] Run: docker container inspect ha-922218-m03 --format={{.State.Status}}
	I1120 20:51:08.733128  318517 status.go:371] ha-922218-m03 host status = "Running" (err=<nil>)
	I1120 20:51:08.733162  318517 host.go:66] Checking if "ha-922218-m03" exists ...
	I1120 20:51:08.733426  318517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m03
	I1120 20:51:08.751682  318517 host.go:66] Checking if "ha-922218-m03" exists ...
	I1120 20:51:08.752030  318517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:51:08.752087  318517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m03
	I1120 20:51:08.771298  318517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m03/id_rsa Username:docker}
	I1120 20:51:08.867446  318517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:51:08.880655  318517 kubeconfig.go:125] found "ha-922218" server: "https://192.168.49.254:8443"
	I1120 20:51:08.880700  318517 api_server.go:166] Checking apiserver status ...
	I1120 20:51:08.880740  318517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 20:51:08.892314  318517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W1120 20:51:08.901282  318517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 20:51:08.901339  318517 ssh_runner.go:195] Run: ls
	I1120 20:51:08.905120  318517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1120 20:51:08.909177  318517 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1120 20:51:08.909203  318517 status.go:463] ha-922218-m03 apiserver status = Running (err=<nil>)
	I1120 20:51:08.909235  318517 status.go:176] ha-922218-m03 status: &{Name:ha-922218-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:51:08.909261  318517 status.go:174] checking status of ha-922218-m04 ...
	I1120 20:51:08.909504  318517 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:51:08.928955  318517 status.go:371] ha-922218-m04 host status = "Running" (err=<nil>)
	I1120 20:51:08.928984  318517 host.go:66] Checking if "ha-922218-m04" exists ...
	I1120 20:51:08.929255  318517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-922218-m04
	I1120 20:51:08.949303  318517 host.go:66] Checking if "ha-922218-m04" exists ...
	I1120 20:51:08.949619  318517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 20:51:08.949670  318517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-922218-m04
	I1120 20:51:08.968692  318517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/ha-922218-m04/id_rsa Username:docker}
	I1120 20:51:09.061734  318517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 20:51:09.074760  318517 status.go:176] ha-922218-m04 status: &{Name:ha-922218-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.88s)
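
Note: the "exit status 7" from `status` above is the expected outcome, not a failure: minikube folds stopped host/kubelet/apiserver conditions into the exit code, and the test asserts on the per-node report instead. A sketch of distinguishing that case in Go (profile from this run; treating any exit code 7 as "some node stopped" is this sketch's simplification, not minikube's documented contract):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "ha-922218", "status").CombinedOutput()
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Printf("all running:\n%s", out)
    	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
    		// Degraded but reportable: inspect the per-node output.
    		fmt.Printf("degraded:\n%s", out)
    	default:
    		fmt.Println("status failed:", err)
    	}
    }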

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 node start m02 --alsologtostderr -v 5: (7.992242534s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 node delete m03 --alsologtostderr -v 5: (9.749892749s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)
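
Note: the go-template handed to `kubectl get nodes` above iterates every node's .status.conditions and prints the status of the Ready condition. kubectl evaluates it against unstructured JSON, so the lowercase keys are map lookups; the same template can be exercised locally with Go's text/template over nested maps (the stand-in node list is illustrative):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Same template as above: print the Ready condition per node.
    	const tmpl = `{{range .items}}{{range .status.conditions}}` +
    		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    	nodes := map[string]any{
    		"items": []any{
    			map[string]any{"status": map[string]any{"conditions": []any{
    				map[string]any{"type": "Ready", "status": "True"},
    			}}},
    			map[string]any{"status": map[string]any{"conditions": []any{
    				map[string]any{"type": "MemoryPressure", "status": "False"},
    				map[string]any{"type": "Ready", "status": "True"},
    			}}},
    		},
    	}

    	t := template.Must(template.New("ready").Parse(tmpl))
    	_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
    }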

TestMultiControlPlane/serial/StopCluster (38.31s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 stop --alsologtostderr -v 5: (38.183963366s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5: exit status 7 (121.329447ms)

-- stdout --
	ha-922218
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-922218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-922218-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1120 20:59:15.808948  334955 out.go:360] Setting OutFile to fd 1 ...
	I1120 20:59:15.809084  334955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:59:15.809093  334955 out.go:374] Setting ErrFile to fd 2...
	I1120 20:59:15.809097  334955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 20:59:15.809344  334955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 20:59:15.809519  334955 out.go:368] Setting JSON to false
	I1120 20:59:15.809555  334955 mustload.go:66] Loading cluster: ha-922218
	I1120 20:59:15.809678  334955 notify.go:221] Checking for updates...
	I1120 20:59:15.810068  334955 config.go:182] Loaded profile config "ha-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 20:59:15.810090  334955 status.go:174] checking status of ha-922218 ...
	I1120 20:59:15.810649  334955 cli_runner.go:164] Run: docker container inspect ha-922218 --format={{.State.Status}}
	I1120 20:59:15.830115  334955 status.go:371] ha-922218 host status = "Stopped" (err=<nil>)
	I1120 20:59:15.830154  334955 status.go:384] host is not running, skipping remaining checks
	I1120 20:59:15.830163  334955 status.go:176] ha-922218 status: &{Name:ha-922218 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:59:15.830193  334955 status.go:174] checking status of ha-922218-m02 ...
	I1120 20:59:15.830492  334955 cli_runner.go:164] Run: docker container inspect ha-922218-m02 --format={{.State.Status}}
	I1120 20:59:15.848619  334955 status.go:371] ha-922218-m02 host status = "Stopped" (err=<nil>)
	I1120 20:59:15.848654  334955 status.go:384] host is not running, skipping remaining checks
	I1120 20:59:15.848664  334955 status.go:176] ha-922218-m02 status: &{Name:ha-922218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 20:59:15.848703  334955 status.go:174] checking status of ha-922218-m04 ...
	I1120 20:59:15.849096  334955 cli_runner.go:164] Run: docker container inspect ha-922218-m04 --format={{.State.Status}}
	I1120 20:59:15.867750  334955 status.go:371] ha-922218-m04 host status = "Stopped" (err=<nil>)
	I1120 20:59:15.867775  334955 status.go:384] host is not running, skipping remaining checks
	I1120 20:59:15.867781  334955 status.go:176] ha-922218-m04 status: &{Name:ha-922218-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.31s)

TestMultiControlPlane/serial/RestartCluster (55.22s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.399390971s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.22s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (41.95s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-922218 node add --control-plane --alsologtostderr -v 5: (41.026861124s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-922218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (38.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-251329 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-251329 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.488224938s)
--- PASS: TestJSONOutput/start/Command (38.49s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.24s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-251329 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-251329 --output=json --user=testUser: (6.238060943s)
--- PASS: TestJSONOutput/stop/Command (6.24s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-354194 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-354194 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.300738ms)

-- stdout --
	{"specversion":"1.0","id":"71ef1768-22bd-42a2-9f09-4a1abd0b4e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-354194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"849b6bde-93e0-4cfc-8119-4d47ab5314e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"2f886ca5-7c60-48c9-9fc6-f7e36a8c305e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"02c3df5b-1150-42d5-bd86-85586effe3d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig"}}
	{"specversion":"1.0","id":"7ec23236-e6a1-42b3-82f8-bbd975ea2b7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube"}}
	{"specversion":"1.0","id":"e3907db2-b357-44df-82ff-c4b2be358362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e1d705b9-2ff4-4d27-b0a3-d5f9fdc87ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9564a10-62cd-4143-bd1c-bca50afa30df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-354194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-354194
--- PASS: TestErrorJSONOutput (0.24s)
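
Note: every `--output=json` line above is a CloudEvents-style envelope (specversion, id, source, type, and a data object); the final error event carries name, message, and exitcode in data, which is why the command exits 56. Decoding one of the lines shown above in Go (the struct mirrors only the fields visible here; data keys vary by event type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // event covers the fields visible in this log.
    type event struct {
    	SpecVersion string            `json:"specversion"`
    	Source      string            `json:"source"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	line := `{"specversion":"1.0","id":"a9564a10-62cd-4143-bd1c-bca50afa30df",` +
    		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
    		`"datacontenttype":"application/json","data":{"exitcode":"56",` +
    		`"message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
    	var e event
    	if err := json.Unmarshal([]byte(line), &e); err != nil {
    		fmt.Println("bad event:", err)
    		return
    	}
    	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
    }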

TestKicCustomNetwork/create_custom_network (36.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-712297 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-712297 --network=: (34.150768946s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-712297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-712297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-712297: (2.171337323s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.34s)

TestKicCustomNetwork/use_default_bridge_network (24.21s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-963952 --network=bridge
E1120 21:02:40.580401  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-963952 --network=bridge: (22.14299018s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-963952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-963952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-963952: (2.044002942s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.21s)

TestKicExistingNetwork (27.27s)

=== RUN   TestKicExistingNetwork
I1120 21:02:59.897707  254094 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1120 21:02:59.915800  254094 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1120 21:02:59.915876  254094 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1120 21:02:59.915903  254094 cli_runner.go:164] Run: docker network inspect existing-network
W1120 21:02:59.933116  254094 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1120 21:02:59.933151  254094 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1120 21:02:59.933167  254094 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1120 21:02:59.933326  254094 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1120 21:02:59.952776  254094 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-acedad58d8d6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:fd:4c:d6:a5:f2} reservation:<nil>}
I1120 21:02:59.953194  254094 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002f5390}
I1120 21:02:59.953247  254094 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1120 21:02:59.953311  254094 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1120 21:03:00.005638  254094 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-664739 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-664739 --network=existing-network: (25.103826361s)
helpers_test.go:175: Cleaning up "existing-network-664739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-664739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-664739: (2.020306044s)
I1120 21:03:27.148931  254094 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.27s)
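
[editor's note] The sequence above shows minikube probing for a free subnet (192.168.49.0/24 is taken by an existing bridge, so it picks 192.168.58.0/24) and then creating a labeled bridge network before the test reuses it via `--network=existing-network`. A minimal Go sketch of that same `docker network create` invocation, assuming Docker is on PATH; all flag values are copied from the log line above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags mirror the `docker network create` call minikube ran above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network").CombinedOutput()
		if err != nil {
			fmt.Printf("create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("created network: %s", out)
	}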

TestKicCustomSubnet (25.08s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-021913 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-021913 --subnet=192.168.60.0/24: (22.867518203s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-021913 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-021913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-021913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-021913: (2.189258863s)
--- PASS: TestKicCustomSubnet (25.08s)

TestKicStaticIP (24.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-515569 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-515569 --static-ip=192.168.200.200: (22.596174223s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-515569 ip
helpers_test.go:175: Cleaning up "static-ip-515569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-515569
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-515569: (2.194289117s)
--- PASS: TestKicStaticIP (24.95s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-744539 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-744539 --driver=docker  --container-runtime=crio: (21.819068178s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-747161 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-747161 --driver=docker  --container-runtime=crio: (20.662401258s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-744539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-747161
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-747161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-747161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-747161: (2.353572505s)
helpers_test.go:175: Cleaning up "first-744539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-744539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-744539: (2.349612988s)
--- PASS: TestMinikubeProfile (48.49s)

TestMountStart/serial/StartWithMountFirst (5.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-240181 --memory=3072 --mount-string /tmp/TestMountStartserial1564395827/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-240181 --memory=3072 --mount-string /tmp/TestMountStartserial1564395827/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.876393643s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.88s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-240181 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-257268 --memory=3072 --mount-string /tmp/TestMountStartserial1564395827/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-257268 --memory=3072 --mount-string /tmp/TestMountStartserial1564395827/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.834197075s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.83s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-257268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-240181 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-240181 --alsologtostderr -v=5: (1.694580535s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-257268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-257268
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-257268: (1.259939286s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8.20s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-257268
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-257268: (7.199345524s)
--- PASS: TestMountStart/serial/RestartStopped (8.20s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-257268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535509 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535509 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.318585089s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.84s)

TestMultiNode/serial/DeployApp2Nodes (4.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-535509 -- rollout status deployment/busybox: (3.206640634s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-bfkjb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-ntxjj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-bfkjb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-ntxjj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-bfkjb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-ntxjj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-bfkjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-bfkjb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-ntxjj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-535509 -- exec busybox-7b57f96db7-ntxjj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
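
[editor's note] The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above depends on busybox's nslookup layout: the answer for the queried name lands on line 5 as `Address 1: <ip> <name>`, so `awk 'NR==5'` keeps that line and `cut -d' ' -f3` extracts the third space-separated field, the host gateway IP (192.168.67.1 here), which the follow-up `ping -c 1` then targets. Illustrative busybox output (the DNS server values are hypothetical; only the gateway comes from the log):

	Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

	Name:      host.minikube.internal
	Address 1: 192.168.67.1 host.minikube.internal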

TestMultiNode/serial/AddNode (22.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-535509 -v=5 --alsologtostderr
E1120 21:06:46.755532  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-535509 -v=5 --alsologtostderr: (22.104508817s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.75s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-535509 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp testdata/cp-test.txt multinode-535509:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150096177/001/cp-test_multinode-535509.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509:/home/docker/cp-test.txt multinode-535509-m02:/home/docker/cp-test_multinode-535509_multinode-535509-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test_multinode-535509_multinode-535509-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509:/home/docker/cp-test.txt multinode-535509-m03:/home/docker/cp-test_multinode-535509_multinode-535509-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test_multinode-535509_multinode-535509-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp testdata/cp-test.txt multinode-535509-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150096177/001/cp-test_multinode-535509-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m02:/home/docker/cp-test.txt multinode-535509:/home/docker/cp-test_multinode-535509-m02_multinode-535509.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test_multinode-535509-m02_multinode-535509.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m02:/home/docker/cp-test.txt multinode-535509-m03:/home/docker/cp-test_multinode-535509-m02_multinode-535509-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test_multinode-535509-m02_multinode-535509-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp testdata/cp-test.txt multinode-535509-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150096177/001/cp-test_multinode-535509-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m03:/home/docker/cp-test.txt multinode-535509:/home/docker/cp-test_multinode-535509-m03_multinode-535509.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509 "sudo cat /home/docker/cp-test_multinode-535509-m03_multinode-535509.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 cp multinode-535509-m03:/home/docker/cp-test.txt multinode-535509-m02:/home/docker/cp-test_multinode-535509-m03_multinode-535509-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 ssh -n multinode-535509-m02 "sudo cat /home/docker/cp-test_multinode-535509-m03_multinode-535509-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.91s)
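
[editor's note] Every `cp` in the matrix above is verified the same way: copy a file into a node, read it back with `ssh -n <node> "sudo cat ..."`, and compare against the source. A minimal Go sketch of one such round trip (binary path, profile, and node names as in the log; this is an illustration, not the actual helper in helpers_test.go):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const mk = "out/minikube-linux-amd64"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		// Copy the file into the node, then read it back over ssh.
		if err := exec.Command(mk, "-p", "multinode-535509", "cp",
			"testdata/cp-test.txt", "multinode-535509:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		got, err := exec.Command(mk, "-p", "multinode-535509", "ssh", "-n", "multinode-535509",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
			panic("round-tripped file does not match source")
		}
		fmt.Println("cp round trip OK")
	}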

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-535509 node stop m03: (1.268534349s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535509 status: exit status 7 (503.430261ms)

-- stdout --
	multinode-535509
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535509-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535509-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr: exit status 7 (512.594533ms)

-- stdout --
	multinode-535509
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-535509-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-535509-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1120 21:07:20.066092  395361 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:07:20.066352  395361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:20.066362  395361 out.go:374] Setting ErrFile to fd 2...
	I1120 21:07:20.066367  395361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:07:20.066572  395361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:07:20.066750  395361 out.go:368] Setting JSON to false
	I1120 21:07:20.066784  395361 mustload.go:66] Loading cluster: multinode-535509
	I1120 21:07:20.066880  395361 notify.go:221] Checking for updates...
	I1120 21:07:20.067151  395361 config.go:182] Loaded profile config "multinode-535509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:07:20.067166  395361 status.go:174] checking status of multinode-535509 ...
	I1120 21:07:20.067683  395361 cli_runner.go:164] Run: docker container inspect multinode-535509 --format={{.State.Status}}
	I1120 21:07:20.086426  395361 status.go:371] multinode-535509 host status = "Running" (err=<nil>)
	I1120 21:07:20.086452  395361 host.go:66] Checking if "multinode-535509" exists ...
	I1120 21:07:20.086700  395361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535509
	I1120 21:07:20.105171  395361 host.go:66] Checking if "multinode-535509" exists ...
	I1120 21:07:20.105462  395361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:07:20.105504  395361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535509
	I1120 21:07:20.125444  395361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/multinode-535509/id_rsa Username:docker}
	I1120 21:07:20.219200  395361 ssh_runner.go:195] Run: systemctl --version
	I1120 21:07:20.225524  395361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:07:20.237837  395361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:07:20.299770  395361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-20 21:07:20.288437461 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:07:20.300463  395361 kubeconfig.go:125] found "multinode-535509" server: "https://192.168.67.2:8443"
	I1120 21:07:20.300501  395361 api_server.go:166] Checking apiserver status ...
	I1120 21:07:20.300537  395361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1120 21:07:20.312147  395361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1260/cgroup
	W1120 21:07:20.320332  395361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1260/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1120 21:07:20.320374  395361 ssh_runner.go:195] Run: ls
	I1120 21:07:20.323848  395361 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1120 21:07:20.328012  395361 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1120 21:07:20.328041  395361 status.go:463] multinode-535509 apiserver status = Running (err=<nil>)
	I1120 21:07:20.328055  395361 status.go:176] multinode-535509 status: &{Name:multinode-535509 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:07:20.328081  395361 status.go:174] checking status of multinode-535509-m02 ...
	I1120 21:07:20.328354  395361 cli_runner.go:164] Run: docker container inspect multinode-535509-m02 --format={{.State.Status}}
	I1120 21:07:20.346731  395361 status.go:371] multinode-535509-m02 host status = "Running" (err=<nil>)
	I1120 21:07:20.346760  395361 host.go:66] Checking if "multinode-535509-m02" exists ...
	I1120 21:07:20.347602  395361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-535509-m02
	I1120 21:07:20.367872  395361 host.go:66] Checking if "multinode-535509-m02" exists ...
	I1120 21:07:20.368154  395361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1120 21:07:20.368203  395361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-535509-m02
	I1120 21:07:20.388897  395361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21923-250580/.minikube/machines/multinode-535509-m02/id_rsa Username:docker}
	I1120 21:07:20.482787  395361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1120 21:07:20.496805  395361 status.go:176] multinode-535509-m02 status: &{Name:multinode-535509-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:07:20.496865  395361 status.go:174] checking status of multinode-535509-m03 ...
	I1120 21:07:20.497124  395361 cli_runner.go:164] Run: docker container inspect multinode-535509-m03 --format={{.State.Status}}
	I1120 21:07:20.517110  395361 status.go:371] multinode-535509-m03 host status = "Stopped" (err=<nil>)
	I1120 21:07:20.517145  395361 status.go:384] host is not running, skipping remaining checks
	I1120 21:07:20.517152  395361 status.go:176] multinode-535509-m03 status: &{Name:multinode-535509-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
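
[editor's note] As the run above shows, `minikube status` reports degraded state through its exit code (exit status 7 here because one node's host and kubelet are Stopped), so callers must treat a non-zero exit as data rather than as a hard failure. A small standard-library Go sketch of reading that code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-535509", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// 7 in the run above: at least one host/kubelet reported Stopped.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}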

TestMultiNode/serial/StartAfterStop (7.57s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-535509 node start m03 -v=5 --alsologtostderr: (6.857902619s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.57s)

TestMultiNode/serial/RestartKeepsNodes (77.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535509
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-535509
E1120 21:07:40.579387  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-535509: (30.001974718s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535509 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535509 --wait=true -v=5 --alsologtostderr: (47.000230198s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535509
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.13s)

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-535509 node delete m03: (4.659626354s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (30.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 stop
E1120 21:09:03.646938  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-535509 stop: (30.253802367s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535509 status: exit status 7 (101.94703ms)

-- stdout --
	multinode-535509
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535509-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr: exit status 7 (104.427548ms)

-- stdout --
	multinode-535509
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-535509-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1120 21:09:20.912670  405254 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:09:20.912796  405254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:09:20.912804  405254 out.go:374] Setting ErrFile to fd 2...
	I1120 21:09:20.912811  405254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:09:20.913042  405254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:09:20.913205  405254 out.go:368] Setting JSON to false
	I1120 21:09:20.913248  405254 mustload.go:66] Loading cluster: multinode-535509
	I1120 21:09:20.913378  405254 notify.go:221] Checking for updates...
	I1120 21:09:20.913623  405254 config.go:182] Loaded profile config "multinode-535509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:09:20.913648  405254 status.go:174] checking status of multinode-535509 ...
	I1120 21:09:20.914102  405254 cli_runner.go:164] Run: docker container inspect multinode-535509 --format={{.State.Status}}
	I1120 21:09:20.933697  405254 status.go:371] multinode-535509 host status = "Stopped" (err=<nil>)
	I1120 21:09:20.933747  405254 status.go:384] host is not running, skipping remaining checks
	I1120 21:09:20.933754  405254 status.go:176] multinode-535509 status: &{Name:multinode-535509 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1120 21:09:20.933789  405254 status.go:174] checking status of multinode-535509-m02 ...
	I1120 21:09:20.934044  405254 cli_runner.go:164] Run: docker container inspect multinode-535509-m02 --format={{.State.Status}}
	I1120 21:09:20.955166  405254 status.go:371] multinode-535509-m02 host status = "Stopped" (err=<nil>)
	I1120 21:09:20.955194  405254 status.go:384] host is not running, skipping remaining checks
	I1120 21:09:20.955204  405254 status.go:176] multinode-535509-m02 status: &{Name:multinode-535509-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.46s)

TestMultiNode/serial/RestartMultiNode (45.51s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535509 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1120 21:09:49.824638  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535509 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (44.897893062s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-535509 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.51s)

TestMultiNode/serial/ValidateNameConflict (23.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-535509
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535509-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-535509-m02 --driver=docker  --container-runtime=crio: exit status 14 (82.497046ms)

-- stdout --
	* [multinode-535509-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-535509-m02' is duplicated with machine name 'multinode-535509-m02' in profile 'multinode-535509'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-535509-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-535509-m03 --driver=docker  --container-runtime=crio: (20.848384276s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-535509
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-535509: exit status 80 (288.50927ms)

-- stdout --
	* Adding node m03 to cluster multinode-535509 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-535509-m03 already exists in multinode-535509-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-535509-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-535509-m03: (2.373106326s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.66s)
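
[editor's note] Both failures above are fast-fail validations: a new profile name may not collide with a machine name inside an existing multinode profile (MK_USAGE, exit 14), and `node add` refuses a node that already belongs to another profile (GUEST_NODE_ADD, exit 80). A hedged Go sketch of the first rule, with names and wording taken from the log; this is an illustration, not minikube's actual implementation:

	package main

	import "fmt"

	// validateProfileName mimics the duplicate-name rule seen above.
	func validateProfileName(name string, machinesInOtherProfiles []string) error {
		for _, m := range machinesInOtherProfiles {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q: profile name should be unique", name, m)
			}
		}
		return nil
	}

	func main() {
		err := validateProfileName("multinode-535509-m02",
			[]string{"multinode-535509", "multinode-535509-m02"})
		fmt.Println(err) // non-nil: analogous to the exit-14 case above
	}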

TestPreload (116.01s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-088502 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-088502 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (50.645984255s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-088502 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-088502 image pull gcr.io/k8s-minikube/busybox: (2.171202547s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-088502
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-088502: (5.992571421s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-088502 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1120 21:11:46.757421  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-088502 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.5698255s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-088502 image list
helpers_test.go:175: Cleaning up "test-preload-088502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-088502
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-088502: (2.395592563s)
--- PASS: TestPreload (116.01s)
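
[editor's note] Condensed from the log, the preload check is a four-step round trip: start a cluster with `--preload=false` on an older Kubernetes version, `image pull gcr.io/k8s-minikube/busybox` into it, `stop` the cluster, then `start` it again with preload enabled by default. The final `image list` is expected to still include the pulled busybox image, i.e. restoring the preloaded tarball must not clobber images added after the initial start.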

TestScheduledStopUnix (97.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-734114 --memory=3072 --driver=docker  --container-runtime=crio
E1120 21:12:40.578947  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-734114 --memory=3072 --driver=docker  --container-runtime=crio: (20.928411087s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-734114 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1120 21:12:51.318242  422310 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:12:51.318344  422310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:51.318348  422310 out.go:374] Setting ErrFile to fd 2...
	I1120 21:12:51.318352  422310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:51.318533  422310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:12:51.318755  422310 out.go:368] Setting JSON to false
	I1120 21:12:51.318852  422310 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:12:51.319247  422310 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:12:51.319338  422310 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/config.json ...
	I1120 21:12:51.319541  422310 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:12:51.319669  422310 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-734114 -n scheduled-stop-734114
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-734114 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1120 21:12:51.728404  422458 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:12:51.728648  422458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:51.728656  422458 out.go:374] Setting ErrFile to fd 2...
	I1120 21:12:51.728660  422458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:12:51.728847  422458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:12:51.729071  422458 out.go:368] Setting JSON to false
	I1120 21:12:51.729274  422458 daemonize_unix.go:73] killing process 422345 as it is an old scheduled stop
	I1120 21:12:51.729396  422458 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:12:51.729822  422458 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:12:51.729909  422458 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/config.json ...
	I1120 21:12:51.730147  422458 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:12:51.730310  422458 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1120 21:12:51.736098  254094 retry.go:31] will retry after 110.248µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.737270  254094 retry.go:31] will retry after 92.83µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.738432  254094 retry.go:31] will retry after 318.788µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.739591  254094 retry.go:31] will retry after 174.542µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.740753  254094 retry.go:31] will retry after 520.068µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.741917  254094 retry.go:31] will retry after 874.338µs: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.743066  254094 retry.go:31] will retry after 1.102304ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.745271  254094 retry.go:31] will retry after 1.138407ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.747491  254094 retry.go:31] will retry after 3.130508ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.751702  254094 retry.go:31] will retry after 5.138917ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.757935  254094 retry.go:31] will retry after 7.431272ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.766206  254094 retry.go:31] will retry after 11.219024ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.778485  254094 retry.go:31] will retry after 16.763862ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.795745  254094 retry.go:31] will retry after 22.062825ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.817943  254094 retry.go:31] will retry after 18.792214ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
I1120 21:12:51.837304  254094 retry.go:31] will retry after 45.012245ms: open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-734114 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-734114 -n scheduled-stop-734114
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-734114
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-734114 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1120 21:13:17.663535  423120 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:13:17.663797  423120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:17.663806  423120 out.go:374] Setting ErrFile to fd 2...
	I1120 21:13:17.663809  423120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:13:17.664010  423120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:13:17.664256  423120 out.go:368] Setting JSON to false
	I1120 21:13:17.664339  423120 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:13:17.664671  423120 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:13:17.664734  423120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/scheduled-stop-734114/config.json ...
	I1120 21:13:17.664925  423120 mustload.go:66] Loading cluster: scheduled-stop-734114
	I1120 21:13:17.665023  423120 config.go:182] Loaded profile config "scheduled-stop-734114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-734114
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-734114: exit status 7 (86.599861ms)

                                                
                                                
-- stdout --
	scheduled-stop-734114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-734114 -n scheduled-stop-734114
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-734114 -n scheduled-stop-734114: exit status 7 (80.238642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-734114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-734114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-734114: (4.834943768s)
--- PASS: TestScheduledStopUnix (97.35s)
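Note: the retry.go:31 lines above show the test helper polling for the scheduled-stop pid file with roughly doubling delays until it appears. A minimal Go sketch of that wait-with-backoff pattern (the path, starting delay, and attempt cap are illustrative assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for a file, roughly doubling the delay between
// attempts, mirroring the "will retry after ..." intervals logged above.
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond // assumed starting delay
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return nil, fmt.Errorf("pid file %s never appeared", path)
}

func main() {
	if _, err := waitForPidFile("/tmp/scheduled-stop.pid", 16); err != nil {
		fmt.Println(err)
	}
}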

                                                
                                    
TestInsufficientStorage (12.47s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-063724 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-063724 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.948129336s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f97df8b4-de10-4aa1-8667-f7ff74658dd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-063724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"27d9d8a3-7278-4d12-9b44-c595b250b8c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21923"}}
	{"specversion":"1.0","id":"46f374a3-bd5e-4ac9-83d8-c9e83ca82d96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80f15af8-79cf-4064-8379-7e8e1f455aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig"}}
	{"specversion":"1.0","id":"4421f7bb-e4d7-41fe-8b81-7c44f36a08f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube"}}
	{"specversion":"1.0","id":"432ea907-c830-484d-8cda-87e679dc32e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"edf7f7e7-f31b-4a17-a22a-fdf7fbeab797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c0dd485b-b912-4b69-abc4-23767e1d8cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b3eaeb64-27d4-4fa0-9621-92c13687588b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8e0c6146-0e97-4bb4-bffb-cdcef2c952f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e863723-1906-4deb-a471-b70558603dbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"761ec7cf-ddf8-469d-b418-f68e444807a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-063724\" primary control-plane node in \"insufficient-storage-063724\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"62650238-46f3-43f8-be66-ec5d1152466e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b894a18-0ef8-492a-a2eb-1c2047bad017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"22d4ecf1-8499-4bdf-8f23-0283a39d2660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-063724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-063724 --output=json --layout=cluster: exit status 7 (298.759786ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-063724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-063724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 21:14:17.905899  425660 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-063724" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-063724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-063724 --output=json --layout=cluster: exit status 7 (301.400635ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-063724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-063724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1120 21:14:18.208022  425769 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-063724" does not appear in /home/jenkins/minikube-integration/21923-250580/kubeconfig
	E1120 21:14:18.219439  425769 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/insufficient-storage-063724/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-063724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-063724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-063724: (1.920791012s)
--- PASS: TestInsufficientStorage (12.47s)
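Note: with --output=json, every line minikube emits is a CloudEvents v1.0 envelope; the "type" field distinguishes steps, info messages, and errors, and the RSRC_DOCKER_STORAGE failure above arrives as an io.k8s.sigs.minikube.error event carrying "exitcode" and "message" in its data map. A minimal sketch of consuming that stream (not minikube's own code; it assumes the JSON lines are piped to stdin):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures only the envelope fields used below; the real payload
// also carries specversion, id, source, and datacontenttype.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exitcode=%s: %s\n", e.Data["exitcode"], e.Data["message"])
		}
	}
}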

                                                
                                    
TestRunningBinaryUpgrade (70.58s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1094521484 start -p running-upgrade-522904 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1094521484 start -p running-upgrade-522904 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.979998509s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-522904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1120 21:16:46.755639  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-522904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.945165456s)
helpers_test.go:175: Cleaning up "running-upgrade-522904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-522904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-522904: (2.02271448s)
--- PASS: TestRunningBinaryUpgrade (70.58s)

                                                
                                    
TestKubernetesUpgrade (312.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.342542059s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-149367
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-149367: (5.069616009s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-149367 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-149367 status --format={{.Host}}: exit status 7 (100.040789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.285239661s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-149367 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (112.860356ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-149367] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-149367
	    minikube start -p kubernetes-upgrade-149367 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1493672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-149367 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-149367 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.74977037s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-149367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-149367
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-149367: (3.160939707s)
--- PASS: TestKubernetesUpgrade (312.90s)
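Note: the test leans on minikube's exit codes rather than parsing output: status exits 7 for a stopped host (treated as "may be ok" above), and start exits 106 (K8S_DOWNGRADE_UNSUPPORTED) when asked to move an existing cluster backwards. A sketch of reading those codes from Go (profile name reused from the run above; the binary path is the test's local build):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary and returns its exit code, or -1 if
// the process could not be started at all.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	switch code := run("status", "-p", "kubernetes-upgrade-149367"); code {
	case 0, 7: // 7 = host stopped; the log above calls this "may be ok"
		fmt.Println("status ok (cluster may simply be stopped)")
	default:
		fmt.Printf("unexpected status exit code %d\n", code)
	}
}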

                                                
                                    
TestMissingContainerUpgrade (120.06s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2376545531 start -p missing-upgrade-328765 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2376545531 start -p missing-upgrade-328765 --memory=3072 --driver=docker  --container-runtime=crio: (1m8.64729445s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-328765
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-328765: (1.774157352s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-328765
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-328765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-328765 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.7217969s)
helpers_test.go:175: Cleaning up "missing-upgrade-328765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-328765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-328765: (2.448990149s)
--- PASS: TestMissingContainerUpgrade (120.06s)

                                                
                                    
TestPause/serial/Start (49.41s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-643572 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-643572 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.407119875s)
--- PASS: TestPause/serial/Start (49.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (104.621345ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-806709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
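Note: the MK_USAGE failure above is a pure flag-validation check: --no-kubernetes and an explicit --kubernetes-version contradict each other, so minikube exits with status 14 before doing any work. A minimal sketch of that kind of mutual-exclusion guard (a hypothetical standalone CLI, not minikube's actual flag wiring):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "explicit Kubernetes version")
	flag.Parse()

	// Reject the contradictory combination up front, as the log above shows.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status 14 observed above
	}
}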

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806709 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806709 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.340170337s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806709 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.71s)

                                                
                                    
TestNetworkPlugins/group/false (7.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-936763 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-936763 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (1.787618748s)

                                                
                                                
-- stdout --
	* [false-936763] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21923
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1120 21:14:26.818260  428102 out.go:360] Setting OutFile to fd 1 ...
	I1120 21:14:26.818685  428102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.818697  428102 out.go:374] Setting ErrFile to fd 2...
	I1120 21:14:26.818704  428102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1120 21:14:26.819049  428102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21923-250580/.minikube/bin
	I1120 21:14:26.819704  428102 out.go:368] Setting JSON to false
	I1120 21:14:26.820698  428102 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":14209,"bootTime":1763659058,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1120 21:14:26.820818  428102 start.go:143] virtualization: kvm guest
	I1120 21:14:26.912443  428102 out.go:179] * [false-936763] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1120 21:14:26.979265  428102 notify.go:221] Checking for updates...
	I1120 21:14:26.982624  428102 out.go:179]   - MINIKUBE_LOCATION=21923
	I1120 21:14:27.106797  428102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1120 21:14:27.216281  428102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21923-250580/kubeconfig
	I1120 21:14:27.348276  428102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21923-250580/.minikube
	I1120 21:14:27.504823  428102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1120 21:14:27.569979  428102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1120 21:14:27.629068  428102 config.go:182] Loaded profile config "NoKubernetes-806709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:27.629205  428102 config.go:182] Loaded profile config "offline-crio-735987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:27.629296  428102 config.go:182] Loaded profile config "pause-643572": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1120 21:14:27.629388  428102 driver.go:422] Setting default libvirt URI to qemu:///system
	I1120 21:14:27.652774  428102 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1120 21:14:27.652895  428102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1120 21:14:27.709901  428102 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-20 21:14:27.699924115 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1120 21:14:27.710012  428102 docker.go:319] overlay module found
	I1120 21:14:27.913911  428102 out.go:179] * Using the docker driver based on user configuration
	I1120 21:14:28.068738  428102 start.go:309] selected driver: docker
	I1120 21:14:28.068775  428102 start.go:930] validating driver "docker" against <nil>
	I1120 21:14:28.068794  428102 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1120 21:14:28.130834  428102 out.go:203] 
	W1120 21:14:28.294545  428102 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1120 21:14:28.358389  428102 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-936763 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-936763" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-936763

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-936763"

                                                
                                                
----------------------- debugLogs end: false-936763 [took: 5.30557053s] --------------------------------
helpers_test.go:175: Cleaning up "false-936763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-936763
--- PASS: TestNetworkPlugins/group/false (7.28s)
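Note: the MK_USAGE failure above comes from driver validation: CNI cannot be disabled when the container runtime is crio, so start bails out with exit status 14 before creating anything. A minimal sketch of that guard (the struct and function names are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

// startConfig holds just the two fields this guard needs; the real
// minikube config carries far more.
type startConfig struct {
	ContainerRuntime string
	CNI              string
}

// validateCNI rejects --cni=false with crio, which has no built-in networking.
func validateCNI(c startConfig) error {
	if c.ContainerRuntime == "crio" && c.CNI == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", c.ContainerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI(startConfig{ContainerRuntime: "crio", CNI: "false"}); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the exit status observed in the run above
	}
}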

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.309544092s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806709 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-806709 status -o json: exit status 2 (333.828309ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-806709","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-806709
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-806709: (2.098001141s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.74s)
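Note: status -o json returns a small flat object (see the stdout above), and the exit status 2 merely signals that kubelet and apiserver are stopped; the JSON itself is still well-formed. A sketch of decoding it, with struct fields named after the keys in the logged payload:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys of the status -o json output shown above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-806709","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}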

                                                
                                    
TestNoKubernetes/serial/Start (9.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806709 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.571194718s)
--- PASS: TestNoKubernetes/serial/Start (9.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.12s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-643572 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-643572 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.10994911s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21923-250580/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806709 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806709 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.586534ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
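
Note: the non-zero exit is the pass condition here. "systemctl is-active --quiet" exits 0 only when every named unit is active, and systemd conventionally returns 3 for an inactive unit, which minikube ssh surfaces as the exit status asserted above. Checking by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-806709 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero confirms kubelet is not running, the state this test expects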

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (1.079005626s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.057767946s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-806709
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-806709: (1.331393111s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (10.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806709 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806709 --driver=docker  --container-runtime=crio: (10.057529631s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806709 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806709 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.775372ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (41.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.535070994 start -p stopped-upgrade-933740 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.535070994 start -p stopped-upgrade-933740 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.189531632s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.535070994 -p stopped-upgrade-933740 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.535070994 -p stopped-upgrade-933740 stop: (4.063404372s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-933740 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1120 21:17:40.578976  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-933740 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.391453433s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (41.64s)
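
Note: the upgrade scenario is three steps with two binaries: boot a cluster with a temp copy of the last release (v1.32.0), stop it with that same binary, then restart the stopped cluster with the binary under test. Condensed from the run above:

    /tmp/minikube-v1.32.0.535070994 start -p stopped-upgrade-933740 --memory=3072 --vm-driver=docker --container-runtime=crio   # old binary: start
    /tmp/minikube-v1.32.0.535070994 -p stopped-upgrade-933740 stop                                                              # old binary: stop
    out/minikube-linux-amd64 start -p stopped-upgrade-933740 --memory=3072 --driver=docker --container-runtime=crio             # new binary: restart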

                                                
                                    
TestNetworkPlugins/group/auto/Start (40.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.422069618s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.42s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-933740
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-933740: (1.10938227s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (40.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.632115952s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.63s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-936763 "pgrep -a kubelet"
I1120 21:18:15.228847  254094 config.go:182] Loaded profile config "auto-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f5kb6" [0daf35aa-31a0-4a7d-a14a-caf6243decd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f5kb6" [0daf35aa-31a0-4a7d-a14a-caf6243decd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004055413s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
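
Note: every network-plugin group runs the same three probes from inside the netcat deployment: cluster DNS, a loopback connection, and a hairpin connection back through the pod's own "netcat" Service. Verbatim from the runs above:

    kubectl --context auto-936763 exec deployment/netcat -- nslookup kubernetes.default                    # DNS resolution
    kubectl --context auto-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # loopback
    kubectl --context auto-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin via own Service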

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mc426" [4ad78936-5c82-4f91-a53d-447220f2b311] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00396375s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
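
Note: the ControllerPod step just polls until a pod matching the plugin's label selector is Ready. A roughly equivalent one-liner (timeout value illustrative):

    kubectl --context kindnet-936763 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s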

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.769484586s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-936763 "pgrep -a kubelet"
I1120 21:18:45.650469  254094 config.go:182] Loaded profile config "kindnet-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6tp2f" [42aa26b8-8047-4bf3-a658-9ff0877b3850] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6tp2f" [42aa26b8-8047-4bf3-a658-9ff0877b3850] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003553504s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (40.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (40.861694734s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.86s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.056819029s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.06s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-936763 "pgrep -a kubelet"
I1120 21:19:30.432735  254094 config.go:182] Loaded profile config "enable-default-cni-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d9f2q" [57335adc-5a68-4f6c-8a42-aa1fc971e2de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d9f2q" [57335adc-5a68-4f6c-8a42-aa1fc971e2de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005245282s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fn2vm" [f9365edb-c214-4e6e-9079-d62c5b375bf5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00352442s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-936763 "pgrep -a kubelet"
I1120 21:19:42.053855  254094 config.go:182] Loaded profile config "flannel-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wq485" [835b3796-dc97-44f0-8730-d94bf03b34b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wq485" [835b3796-dc97-44f0-8730-d94bf03b34b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004583341s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (53.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.815928836s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-936763 "pgrep -a kubelet"
I1120 21:19:59.712385  254094 config.go:182] Loaded profile config "bridge-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-99cfb" [c4760903-ead5-4a81-b7b0-f7f95cfbf18b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-99cfb" [c4760903-ead5-4a81-b7b0-f7f95cfbf18b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004530522s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-936763 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.068308689s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.07s)
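
Note: --cni accepts either a built-in plugin name or a path to a CNI manifest; this variant feeds the bundled testdata/kube-flannel.yaml rather than the named flannel plugin. Compare (profile names illustrative):

    minikube start -p flannel-byname --cni=flannel --container-runtime=crio
    minikube start -p flannel-byfile --cni=testdata/kube-flannel.yaml --container-runtime=crio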

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (54.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.381259982s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (58.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.024413419s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.02s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-c9qgq" [67e04173-8f5a-4170-a260-4223a9c0c487] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-c9qgq" [67e04173-8f5a-4170-a260-4223a9c0c487] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003857351s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-936763 "pgrep -a kubelet"
I1120 21:20:58.714154  254094 config.go:182] Loaded profile config "calico-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wrjgz" [7f01eb72-71df-4377-a1d5-90c4d0cc33f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wrjgz" [7f01eb72-71df-4377-a1d5-90c4d0cc33f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.00473376s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-936763 "pgrep -a kubelet"
I1120 21:21:12.772469  254094 config.go:182] Loaded profile config "custom-flannel-936763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-936763 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-trbm2" [edeeeb47-7f09-4711-a7b5-d3966d35300b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-trbm2" [edeeeb47-7f09-4711-a7b5-d3966d35300b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004562402s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-936763 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-936763 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-936214 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1b53bd6f-5850-4bee-9c34-0ebd759fa96b] Pending
helpers_test.go:352: "busybox" [1b53bd6f-5850-4bee-9c34-0ebd759fa96b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1b53bd6f-5850-4bee-9c34-0ebd759fa96b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004452978s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-936214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)
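
Note: DeployApp is a runtime smoke test: create the busybox pod, wait for it to run, then exec a trivial command ("ulimit -n", the open-file limit) to confirm exec works through the container runtime. Roughly the same steps by hand (timeout illustrative):

    kubectl --context old-k8s-version-936214 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-936214 wait --for=condition=Ready pod/busybox --timeout=480s
    kubectl --context old-k8s-version-936214 exec busybox -- /bin/sh -c "ulimit -n"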

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.578469935s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-936214 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-936214 --alsologtostderr -v=3: (16.305447164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:21:46.755102  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/addons-658933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.032775665s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-166874 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2648763c-5822-494b-91d6-789fd9fa6909] Pending
helpers_test.go:352: "busybox" [2648763c-5822-494b-91d6-789fd9fa6909] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2648763c-5822-494b-91d6-789fd9fa6909] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004330549s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-166874 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214: exit status 7 (92.665331ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-936214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
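
Note: the "(may be ok)" reflects minikube's bitmask-style status exit codes (documented in "minikube status --help"): 1 for the host, 2 for the cluster, 4 for Kubernetes, combined by addition. Exit status 7 therefore means all three are down, which is exactly what a freshly stopped profile should report:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
    echo $?   # 7 = 1 (host stopped) + 2 (cluster stopped) + 4 (kubernetes stopped)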

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-936214 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.757021698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-936214 -n old-k8s-version-936214
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-166874 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-166874 --alsologtostderr -v=3: (18.312133315s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-714571 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2f0d580b-0733-4eca-994c-f26f9f207bcc] Pending
helpers_test.go:352: "busybox" [2f0d580b-0733-4eca-994c-f26f9f207bcc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2f0d580b-0733-4eca-994c-f26f9f207bcc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003679629s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-714571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874: exit status 7 (110.216407ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-166874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (46.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-166874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.637526243s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-166874 -n no-preload-166874
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (17.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-714571 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-714571 --alsologtostderr -v=3: (17.035879694s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [02867973-6a01-4a1a-bd7e-194be3d350a6] Pending
helpers_test.go:352: "busybox" [02867973-6a01-4a1a-bd7e-194be3d350a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [02867973-6a01-4a1a-bd7e-194be3d350a6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004109265s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-454524 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-454524 --alsologtostderr -v=3: (16.537317161s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571: exit status 7 (107.781344ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-714571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1120 21:22:40.579016  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/functional-041399/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-714571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.93910941s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-714571 -n embed-certs-714571
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-54v92" [172155fb-773b-4a5d-b9d0-9f9043bd4b72] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003273319s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-54v92" [172155fb-773b-4a5d-b9d0-9f9043bd4b72] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004983449s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-936214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524: exit status 7 (92.606894ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-454524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)
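
The "exit status 7 (may be ok)" pattern above is how the test tells a cleanly stopped cluster apart from a broken one: "minikube status" exits non-zero while the host is down, and stdout reports "Stopped". Reproducing the check by hand looks roughly like this (a sketch using the same profile as above):

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524
	echo $?   # 7 while the cluster is stopped; 0 once it is running again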

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-454524 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.518512902s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-454524 -n default-k8s-diff-port-454524
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.89s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-936214 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)
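
To eyeball the same image inventory manually, something like the following works (a sketch; it assumes the JSON output is an array of objects carrying a repoTags field, and that jq is installed):

	out/minikube-linux-amd64 -p old-k8s-version-936214 image list --format=json | jq -r '.[].repoTags[]'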

TestStartStop/group/newest-cni/serial/FirstStart (29.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.679854339s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.68s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nljn5" [225fe7df-023a-445c-8552-ee244e518192] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003500975s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nljn5" [225fe7df-023a-445c-8552-ee244e518192] Running
E1120 21:23:15.409153  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.415586  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.426994  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.448995  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.490988  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.573378  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:15.735673  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:16.057921  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1120 21:23:16.699532  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003310804s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-166874 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-166874 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-km7xn" [74c07910-53db-450e-8569-ca8454ffb12f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00313057s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-km7xn" [74c07910-53db-450e-8569-ca8454ffb12f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003751334s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-714571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (8.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-678421 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-678421 --alsologtostderr -v=3: (8.0401473s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.04s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-714571 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-psntr" [7b6ddf9d-f15b-465a-89af-d622cce06e01] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003489342s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421: exit status 7 (84.28109ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-678421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (10.33s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-678421 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (9.968734739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-678421 -n newest-cni-678421
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.33s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-psntr" [7b6ddf9d-f15b-465a-89af-d622cce06e01] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00374945s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-454524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-454524 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
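
The warning above is expected: the profile was started with --network-plugin=cni and a kubeadm.pod-network-cidr override, but no CNI plugin is deployed automatically, so workload pods would stay Pending until one is installed. A minimal sketch of that extra setup (assuming flannel; its default net-conf uses 10.244.0.0/16 and would need editing to match the 10.42.0.0/16 CIDR used here):

	kubectl --context newest-cni-678421 apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml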

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-678421 image list --format=json
E1120 21:23:56.389572  254094 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21923-250580/.minikube/profiles/auto-936763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (6.58s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-936763 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-936763

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-936763

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/hosts:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/resolv.conf:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-936763

>>> host: crictl pods:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: crictl containers:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> k8s: describe netcat deployment:
error: context "kubenet-936763" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-936763" does not exist

>>> k8s: netcat logs:
error: context "kubenet-936763" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-936763" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-936763" does not exist

>>> k8s: coredns logs:
error: context "kubenet-936763" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-936763" does not exist

>>> k8s: api server logs:
error: context "kubenet-936763" does not exist

>>> host: /etc/cni:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: ip a s:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: ip r s:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: iptables-save:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: iptables table nat:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-936763" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-936763" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-936763" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: kubelet daemon config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> k8s: kubelet logs:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-936763

>>> host: docker daemon status:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: docker daemon config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: docker system info:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: cri-docker daemon status:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: cri-docker daemon config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: cri-dockerd version:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: containerd daemon status:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: containerd daemon config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: containerd config dump:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: crio daemon status:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: crio daemon config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: /etc/crio:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

>>> host: crio config:
* Profile "kubenet-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-936763"

----------------------- debugLogs end: kubenet-936763 [took: 5.988268914s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-936763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-936763
--- SKIP: TestNetworkPlugins/group/kubenet (6.58s)
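
All of the "context was not found" and "Profile ... not found" lines in the debugLogs dump above are expected: the test skipped before the kubenet-936763 profile was ever created, so every probe ran against a cluster that does not exist. Confirming that by hand is straightforward (a sketch):

	kubectl config get-contexts
	out/minikube-linux-amd64 profile list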

TestNetworkPlugins/group/cilium (4.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-936763 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-936763

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-936763

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-936763

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-936763

>>> host: /etc/nsswitch.conf:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/hosts:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/resolv.conf:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-936763

>>> host: crictl pods:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: crictl containers:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> k8s: describe netcat deployment:
error: context "cilium-936763" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-936763" does not exist

>>> k8s: netcat logs:
error: context "cilium-936763" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-936763" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-936763" does not exist

>>> k8s: coredns logs:
error: context "cilium-936763" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-936763" does not exist

>>> k8s: api server logs:
error: context "cilium-936763" does not exist

>>> host: /etc/cni:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: ip a s:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: ip r s:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: iptables-save:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: iptables table nat:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-936763

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-936763

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-936763" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-936763" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-936763

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-936763

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-936763" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-936763" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-936763" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-936763" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-936763" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: kubelet daemon config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> k8s: kubelet logs:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-936763

>>> host: docker daemon status:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: docker daemon config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: docker system info:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: cri-docker daemon status:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: cri-docker daemon config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: cri-dockerd version:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: containerd daemon status:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: containerd daemon config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: containerd config dump:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: crio daemon status:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: crio daemon config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: /etc/crio:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

>>> host: crio config:
* Profile "cilium-936763" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-936763"

----------------------- debugLogs end: cilium-936763 [took: 4.263055841s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-936763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-936763
--- SKIP: TestNetworkPlugins/group/cilium (4.47s)

x
+
TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-454805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-454805
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
